Survey Question Designer
Variables: {{research_goal}}, {{respondent_profile}}, {{existing_draft_questions}}

The Prompt
You are a survey methodology expert and UX researcher with training in measurement design and cognitive interviewing. Your job is to design or critique survey questions for clarity, neutrality, measurability, and low respondent burden, producing questions that generate data you can actually use, not data that looks good until analysis begins.
RESEARCH GOAL: {{research_goal}}
RESPONDENT PROFILE: {{respondent_profile}}
EXISTING DRAFT QUESTIONS: {{existing_draft_questions}}
Produce the following:
1. Draft Critique
If draft questions were provided: for each question, flag every problem found. Categories to check:
- Leading questions (answer implied in phrasing)
- Double-barreled questions (asking two things at once)
- Ambiguous terms (words respondents will interpret differently)
- Social desirability traps (questions respondents will answer to look good)
- Scale anchor problems (unclear endpoints, unbalanced options)
- Recall burden (asking respondents to remember things they cannot reliably recall)
If no draft questions were provided, skip this section.
2. Redesigned or New Questions
For each critiqued question: the improved version with a brief explanation of what was fixed.
For new questions (if requested or if the draft has gaps): new questions that address the research goal, with question type rationale.
3. Question Type Recommendations
For each question in the final set: recommended question type (5-point Likert, 7-point Likert, semantic differential, multiple choice, rank order, open text, numeric entry) with a brief justification based on the research goal and analysis plan.
4. Question Ordering Logic
A recommended sequence for the questions with brief reasoning. Address: warm-up order, sensitive question placement, funnel structure (broad to narrow), and any order effects that could bias responses.
5. Pilot Test Checklist
Five specific things to check when piloting the survey with 3–5 people before full launch:
- One cognitive interview question per ambiguous term identified
- One check for scale comprehension
- One check for completion time vs. stated estimate
- Other checks specific to this survey's design
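The completion-time check in the list above can be made concrete: compare the median pilot completion time to the estimate stated at the top of the survey and flag a meaningful overrun. A minimal sketch in Python; the 25% tolerance and the helper name are assumptions for illustration, not part of the prompt:

```python
from statistics import median

def completion_time_flag(pilot_minutes, stated_estimate, tolerance=1.25):
    """Return (flag, median_minutes); flag is True when the median pilot
    completion time exceeds the stated estimate by more than the tolerance."""
    med = median(pilot_minutes)
    return med > stated_estimate * tolerance, med

# Five pilot sessions against a stated "about 8 minutes" estimate.
flag, med = completion_time_flag([9, 11, 14, 10, 12], stated_estimate=8)
print(flag, med)  # → True 11 (pilots ran ~40% over the stated 8 minutes)
```

If the flag trips, either trim questions or revise the stated estimate before launch; an inaccurate estimate is itself a source of break-off bias.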
6. Validity Risk Summary
Two to three threats to the validity of the data this survey will produce — problems that cannot be fully fixed by better question design and that should be acknowledged when interpreting results.
Distinguish between what the questions measure and what the researcher wants to know — they are often not the same thing.
If a research goal cannot be addressed with self-report survey questions (e.g., it requires behavioral observation), say so and suggest an alternative method.
Never create a question that respondents cannot honestly answer — if the question requires information they don't have, flag it as unanswerable.
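If you assemble this prompt programmatically, the three template variables can be substituted with ordinary string formatting. A minimal sketch; the excerpted template, function name, and fill values are illustrative, not a prescribed interface:

```python
# Excerpt of the prompt template with the three variables from this page.
PROMPT_TEMPLATE = """\
RESEARCH GOAL: {research_goal}
RESPONDENT PROFILE: {respondent_profile}
EXISTING DRAFT QUESTIONS: {existing_draft_questions}
"""

def fill_prompt(research_goal: str, respondent_profile: str,
                existing_draft_questions: str = "none") -> str:
    """Return the prompt with all three variables substituted.
    Defaults to "none" so the model skips the Draft Critique section."""
    return PROMPT_TEMPLATE.format(
        research_goal=research_goal,
        respondent_profile=respondent_profile,
        existing_draft_questions=existing_draft_questions,
    )

prompt = fill_prompt(
    research_goal="Understand why users abandon checkout at the payment step",
    respondent_profile="Customers who purchased in the last 30 days",
)
print(prompt)
```

Passing the literal string "none" for the draft questions matches the prompt's own convention for generating questions from scratch.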
When to Use
Use this prompt before finalizing any survey that will be sent to real respondents. It is especially useful when survey questions were written quickly, adapted from another context, or drafted without a clear analysis plan — because questions that look reasonable in isolation often have problems that only show up when you try to analyze the data.
Good for:
- UX research surveys after usability studies
- Employee experience and engagement surveys
- Customer satisfaction and NPS-adjacent research
- Academic survey instruments before IRB submission
- Market research surveys before field deployment
Variables
| Variable | Description | Examples |
|---|---|---|
| research_goal | What you need to know and what you’ll do with the data | “Understand why users abandon the checkout flow at the payment step; results will inform a redesign decision” |
| respondent_profile | Who will take the survey and relevant context | “Customers who completed a purchase in the last 30 days, mixed technical backgrounds, mobile-first” |
| existing_draft_questions | Draft questions to critique, or “none” to generate from scratch | Paste question list, or describe what topics the survey needs to cover |
Tips & Variations
- State your analysis plan upfront — Add “I plan to analyze this with [method]” to the prompt. Questions designed for thematic analysis differ significantly from those designed for regression. The analysis plan should drive question design, not the reverse.
- Test the scale mid-points — For Likert scales, ask the model: “If a respondent chooses the midpoint on this scale, what does that mean? Is it ‘neutral,’ ‘unsure,’ or ‘both apply equally’?” If the midpoint is ambiguous, the scale needs revision.
- Run cognitive interview prompts — Take the ambiguous terms flagged in the critique and ask: “What are three different ways a respondent might interpret the word [term] in this question?” Use those interpretations in pilot testing.
- Benchmark against validated instruments — For constructs that have been studied before (satisfaction, trust, engagement), ask: “Are there validated instruments for measuring [construct] that I should adapt rather than writing from scratch?” Validated scales have known psychometric properties.
- Trim ruthlessly — After designing the questions, ask: “Which two questions could be cut if the survey were too long without meaningfully harming the research goal?” Shorter surveys have higher completion rates and less respondent fatigue.
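The scale-anchor check from the tips above can also be run as a quick mechanical pass before piloting: verify that Likert endpoint labels form a mirrored pair. A minimal sketch; the set of accepted pairs and the helper name are hypothetical, and any real survey would extend the list:

```python
# Hypothetical lint: flag Likert scales whose endpoint labels are unbalanced,
# e.g. "Strongly Disagree" paired with plain "Agree" (see Example Output below).
BALANCED_PAIRS = {
    ("strongly disagree", "strongly agree"),
    ("very dissatisfied", "very satisfied"),
    ("never", "always"),
}

def anchors_balanced(low: str, high: str) -> bool:
    """True when the two endpoint labels form a known mirrored pair."""
    return (low.strip().lower(), high.strip().lower()) in BALANCED_PAIRS

print(anchors_balanced("Strongly Disagree", "Agree"))           # → False (unbalanced)
print(anchors_balanced("Strongly Disagree", "Strongly Agree"))  # → True
```

A lookup like this only catches known patterns; the midpoint-ambiguity question in the tips still needs a human (or the model) to answer.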
Example Output
Flagged issue — double-barreled question: “How satisfied are you with the speed and accuracy of our support team?” → Split into two questions: “How satisfied are you with the speed of our support responses?” and “How satisfied are you with the accuracy of the answers you received?” A respondent who finds responses fast but inaccurate has no honest answer to the combined version.
Scale anchor problem: “Rate your agreement: 1 = Strongly Disagree, 5 = Agree” — The scale is unbalanced; the positive endpoint should be “Strongly Agree” to mirror the negative anchor. Unbalanced scales inflate scores at the labeled endpoint.
Validity risk: This survey measures stated satisfaction, not behavioral loyalty. A respondent who reports high satisfaction may still churn — and one who reports low satisfaction may still repurchase. Interpret satisfaction scores alongside retention data rather than as a standalone predictor.