Summary
AI tools like ChatGPT have exploded in popularity with good reason: they allow users to draft, summarize, and edit content with unprecedented speed. While these generic tools can generate any type of content or perform any content task, the user needs to craft an effective prompt to get high-quality output, and often needs to exchange multiple messages with additional guidance and requirements to improve the results.

When you’re building an AI-powered text generation feature, such as a product description or email writer, you typically can’t expect users to craft their own prompts. And unless you’re building a chat interface, you’re unlikely to offer the ability to iteratively improve the output. Instead, your feature needs a robust prompt skeleton that combines with user input to produce high-quality output in a single response (see the sketch after the takeaways below).

For the designer, this means building an interface that helps users provide the exact information that creates a successful prompt. This process is more complex than simple form design or a Mad Libs-style prompt completion tool. The user input, often including free-form text fields, might be required to fill in prompt variables, but it could also change the prompt structure itself, or even override base instructions. The effectiveness of the user input significantly influences the quality of the output, underscoring the need for designers to be deeply familiar with the backend prompt architecture so they can design the frontend.

Drawing on recent text generation projects, I'll demonstrate how the interface design can respond to and evolve with the prompt architecture. I’ll talk about how to determine which prompt components to make invisible to the user, which to provide as predefined options, and which should be authored by the user in free-form text fields.

Takeaways
- How prompt structure can impact user interface design and, conversely, how design can impact prompt structure
- Techniques to provide effective user guidance within AI generation contexts to ensure consistently high-quality output
- Real-world examples and learnings from recent generative AI projects in an e-commerce software product
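To make the prompt-skeleton idea concrete, here is a minimal sketch of how fixed instructions, a predefined tone option, and free-form user input might be assembled into a single prompt. The names (buildPrompt, ProductInput, TONE_INSTRUCTIONS) and the specific instructions are illustrative assumptions, not the actual architecture described in the talk.

```typescript
// Hypothetical sketch of a prompt skeleton combined with user input.
// All names and instruction text are assumptions for illustration.

type ToneOption = "playful" | "sophisticated" | "direct";

interface ProductInput {
  productName: string;      // required free-form field
  keyFeatures: string[];    // free-form list supplied by the user
  tone: ToneOption;         // predefined option exposed as a dropdown
  extraGuidance?: string;   // optional free text that can refine or override base instructions
}

// Detailed tone instructions live in the backend, invisible to the user.
const TONE_INSTRUCTIONS: Record<ToneOption, string> = {
  playful: "Use casual vocabulary, second-person pronouns, and exclamation points sparingly.",
  sophisticated: "Use precise vocabulary, third-person phrasing, and restrained punctuation.",
  direct: "Use short sentences, concrete claims, and no filler adjectives.",
};

function buildPrompt(input: ProductInput): string {
  // Fixed skeleton: base instructions the user never sees.
  const base = [
    "You are writing an e-commerce product description.",
    "Keep it under 120 words and avoid unverifiable claims.",
    `Tone: ${TONE_INSTRUCTIONS[input.tone]}`,
  ];
  // Variables filled in from the interface.
  const user = [
    `Product: ${input.productName}`,
    `Key features: ${input.keyFeatures.join("; ")}`,
  ];
  // Free-form guidance is appended last so it can refine or override the base instructions.
  if (input.extraGuidance) user.push(`Additional guidance: ${input.extraGuidance}`);
  return [...base, ...user].join("\n");
}

// Example usage:
console.log(
  buildPrompt({ productName: "Trail Mug", keyFeatures: ["insulated", "12 oz"], tone: "playful" })
);
```

Appending the free-form guidance last is one way to let user input reshape or override the fixed instructions, which is exactly the kind of structural decision the talk argues designers need to understand before they can design the frontend.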
Key Insights
- Building specialized AI features requires predefining most of the prompt to guide the LLM effectively rather than having users write their own prompts.
- Users refine prompts iteratively in chat interfaces, but specialized tools often allow only one shot with limited input variables.
- Understanding the backend prompt structure is critical for designers to decide which parts are fixed and which are configurable by users.
- Tone of voice is a complex variable that significantly impacts product description differentiation and brand personality.
- Free-text inputs for tone proved difficult for users; predefined tone options simplify user choices while maintaining quality.
- AI models inconsistently detect tone from existing text samples and report high confidence in contradictory results, making this approach unreliable.
- Successful tone options must be sufficiently distinct to allow users to easily identify and select the best fit for their brand.
- Embedding detailed tone attribute descriptions (e.g., vocabulary, pronouns, punctuation) in prompts improves AI output quality.
- The default confident and positive tone of models like ChatGPT may not suit all contexts and needs to be explicitly adjusted in prompts.
- Providing a minimal, intuitive UI that maps to rich, complex backend instructions balances user simplicity with output quality (see the sketch after this list).
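As a companion sketch, here is one hypothetical way a single tone dropdown could map to a richer set of backend instructions covering vocabulary, pronouns, punctuation, and the model's default confidence. The attribute names and tone definitions are assumptions for illustration, not the taxonomy used in the talk.

```typescript
// Hypothetical sketch: one dropdown value expands into a structured set of tone
// attributes that are serialized into the prompt. Field names and values are assumed.

interface ToneAttributes {
  label: string;        // what the user sees in the dropdown
  vocabulary: string;   // e.g. "simple, everyday words" vs. "evocative, sensory language"
  pronouns: string;     // e.g. "second person (you, your)"
  punctuation: string;  // e.g. "short sentences, no exclamation points"
  confidence: string;   // explicitly dial back the model's default assertiveness
}

const TONES: ToneAttributes[] = [
  {
    label: "Friendly",
    vocabulary: "simple, everyday words",
    pronouns: "second person (you, your)",
    punctuation: "short sentences, occasional contractions",
    confidence: "warm but measured; avoid absolute claims",
  },
  {
    label: "Luxury",
    vocabulary: "evocative, sensory language",
    pronouns: "third person, brand-forward",
    punctuation: "longer sentences, no exclamation points",
    confidence: "assured but understated",
  },
];

// Serialize the selected tone into prompt instructions the user never sees.
function toneToPrompt(tone: ToneAttributes): string {
  return [
    `Write in a ${tone.label.toLowerCase()} tone.`,
    `Vocabulary: ${tone.vocabulary}.`,
    `Pronouns: ${tone.pronouns}.`,
    `Punctuation: ${tone.punctuation}.`,
    `Level of confidence: ${tone.confidence}.`,
  ].join(" ");
}

// Example usage: the UI passes only the selected dropdown item.
const toneInstructions = toneToPrompt(TONES[0]);
```

In this framing, the user only ever interacts with the label field; everything else is serialized into the prompt behind the scenes, which is how a single dropdown can map to a far richer set of instructions.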
Notable Quotes
"In a chat interface, you’re able to fill in the information that you missed and iterate on the output as it’s being generated."
"When you’re building a specific AI feature, you don’t want to make your user write their own prompt."
"It becomes the designer’s responsibility to figure out what aspects of the instructions to the LLM should be configurable and what should be fixed."
"A prompt is just a set of instructions to an AI in the context of text generation."
"People get stuck during the sessions when asked to describe their brand voice in just one or two words."
"The LLM gave many different answers for the same passage, each time reporting it was one hundred percent confident."
"Tones need to be distinct enough so that merchants can spot the tone that best fits their brand."
"Even though we’ve got a single dropdown field, it maps to a far richer set of instructions in the prompt."
"The confident tone built into these tools may be one of the more problematic features of Gen AI."
"By default, you’re getting I am very confident of my answers."