Summary
AI tools like ChatGPT have exploded in popularity with good reason: they allow users to draft, summarize, and edit content with unprecedented speed. While these generic tools can generate any type of content or perform any content task, the user needs to craft an effective prompt to get high-quality output, and often needs to exchange multiple messages with additional guidance and requirements to improve results.

When you’re building an AI-powered text generation feature, such as a product description or email writer, you typically can’t expect users to craft their own prompts. And unless you’re building a chat interface, you’re unlikely to offer the ability to iteratively improve the output. Instead, your feature needs a robust prompt skeleton that combines with user input to produce high-quality output in a single response.

For the designer, this means building an interface that helps users provide the exact information that creates a successful prompt. This process is more complex than simple form design or a mad-lib prompt completion tool. The user input, often including free-form text fields, might be required to fill in prompt variables, but it could also change the prompt structure itself, or even override base instructions. The effectiveness of the user input significantly influences the quality of the output, underscoring the need for designers to be deeply familiar with the backend prompt architecture so they can design the frontend.

Drawing on recent text generation projects, I’ll demonstrate how the interface design can respond to and evolve with the prompt architecture. I’ll talk about how to determine which prompt components to make invisible to the user, which to provide as predefined options, and which should be authored by the user in free-form text fields.

Takeaways
• How prompt structure can impact user interface design and, conversely, how design can impact prompt structure
• Techniques to provide effective user guidance within AI generation contexts to ensure consistently high-quality output
• Real-world examples and learnings from recent generative AI projects in an e-commerce software product
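The talk itself doesn’t include code, but the prompt-skeleton idea can be sketched concretely. The TypeScript below shows one way a fixed skeleton might combine with user input: required fields fill variables, while optional fields add or override instructions. The feature (a product-description writer), the ProductInput fields, and buildPrompt are illustrative assumptions, not the implementation discussed in the talk.

```typescript
// Minimal sketch of a prompt skeleton for a hypothetical product-description
// feature. All names (ProductInput, buildPrompt) are illustrative.

interface ProductInput {
  productName: string;     // required: fills a prompt variable
  keyFeatures: string[];   // required: fills a prompt variable
  audience?: string;       // optional: adds an instruction block
  extraGuidance?: string;  // optional free text: can override base rules
}

function buildPrompt(input: ProductInput): string {
  // Fixed skeleton: invisible to the user, always present.
  const lines = [
    "You are a copywriter for an e-commerce store.",
    "Write a single product description of 80-120 words.",
    "Do not invent specifications that are not listed below.",
  ];

  // User input fills variables...
  lines.push(`Product: ${input.productName}`);
  lines.push(`Key features: ${input.keyFeatures.join("; ")}`);

  // ...but can also change the structure of the prompt itself.
  if (input.audience) {
    lines.push(`Write for this audience: ${input.audience}.`);
  }

  // Free-form guidance is appended last so it can override earlier rules.
  if (input.extraGuidance) {
    lines.push(`Additional instructions from the merchant: ${input.extraGuidance}`);
  }

  return lines.join("\n");
}
```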
Key Insights
• Building specialized AI features requires predefining most of the prompt to guide the LLM effectively, rather than having users write their own prompts.
• Users refine prompts iteratively in chat interfaces, but specialized tools often allow only one shot with limited input variables.
• Understanding the backend prompt structure is critical for designers to decide which parts are fixed and which are configurable by users.
• Tone of voice is a complex variable that significantly impacts product description differentiation and brand personality.
• Free-text inputs for tone proved difficult for users; predefined tone options simplify user choices while maintaining quality.
• AI models inconsistently detect tone from existing text samples and report high confidence in contradictory results, making this approach unreliable.
• Successful tone options must be sufficiently distinct to allow users to easily identify and select the best fit for their brand.
• Embedding detailed tone attribute descriptions (e.g., vocabulary, pronouns, punctuation) in prompts improves AI output quality.
• The default confident and positive tone of models like ChatGPT may not suit all contexts and needs to be explicitly adjusted in prompts.
• Providing a minimal, intuitive UI that maps to rich, complex backend instructions balances user simplicity with output quality (a rough sketch of this mapping follows the list).
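To make the last few insights concrete, the sketch below shows how a single tone dropdown might expand into the kind of detailed attribute instructions (vocabulary, pronouns, punctuation) the talk describes, including an explicit instruction to rein in the model’s default confident register. The tone names and attribute text are invented for illustration; they are not the speaker’s actual tone definitions.

```typescript
// Minimal sketch: one dropdown value expands into detailed tone instructions.
// Tone names and attribute text are hypothetical examples.

type Tone = "playful" | "matter-of-fact" | "luxurious";

const TONE_INSTRUCTIONS: Record<Tone, string> = {
  playful: [
    "Vocabulary: everyday words; light wordplay is welcome.",
    "Pronouns: address the reader as 'you'; refer to the brand as 'we'.",
    "Punctuation: exclamation points allowed, at most one per paragraph.",
  ].join("\n"),
  "matter-of-fact": [
    "Vocabulary: plain and specific; avoid superlatives like 'amazing'.",
    "Pronouns: third person; do not address the reader directly.",
    "Punctuation: no exclamation points.",
    // Explicitly dial back the model's default confident, upbeat register.
    "Avoid overstated claims; qualify benefits where evidence is limited.",
  ].join("\n"),
  luxurious: [
    "Vocabulary: refined, sensory adjectives; no slang.",
    "Pronouns: second person, formal register.",
    "Punctuation: full sentences only; no ellipses or exclamation points.",
  ].join("\n"),
};

// The UI shows a single dropdown; the prompt receives the full attribute block.
function toneBlock(selected: Tone): string {
  return `Write in the following tone of voice:\n${TONE_INSTRUCTIONS[selected]}`;
}
```

The design choice this illustrates: the user sees one simple control, while the prompt carries the richer, fixed instructions needed for consistent output quality.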
Notable Quotes
"In a chat interface, you’re able to fill in the information that you missed and iterate on the output as it’s being generated."
"When you’re building a specific AI feature, you don’t want to make your user write their own prompt."
"It becomes the designer’s responsibility to figure out what aspects of the instructions to the LLM should be configurable and what should be fixed."
"A prompt is just a set of instructions to an AI in the context of text generation."
"People get stuck during the sessions when asked to describe their brand voice in just one or two words."
"The LLM gave many different answers for the same passage, each time reporting it was one hundred percent confident."
"Tones need to be distinct enough so that merchants can spot the tone that best fits their brand."
"Even though we’ve got a single dropdown field, it maps to a far richer set of instructions in the prompt."
"The confident tone built into these tools may be one of the more problematic features of Gen AI."
"By default, you’re getting I am very confident of my answers."