How Ramp Fixed Vague Prompts with AI Self-Interrogation
TRIGGER
Users were getting mediocre results from AI because they inherited search behavior—typing vague queries and expecting useful outputs. The gap between what users asked for and what they actually needed was invisible to them.
APPROACH
Ramp trained employees on a meta-prompting loop:
1. Write an initial prompt, but don't execute it.
2. Ask the AI to generate clarifying questions that would help it produce better output.
3. Answer those questions, then have the AI rewrite the original prompt to incorporate the new context.
4. Repeat until satisfied, then execute.
The loop became everyday practice: roughly 90% of Ramp's 1,200 employees use Notion AI monthly.
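The loop can be sketched as a small control-flow wrapper around any chat model. This is a minimal illustration, not Ramp's implementation: `ask_model` is a hypothetical stand-in (Notion AI does not expose a public prompting API), stubbed here so the flow is runnable; swap it for your provider's SDK call.

```python
def ask_model(instruction: str, context: str) -> str:
    """Hypothetical placeholder for a real LLM call; replace with your provider's SDK."""
    return f"[model output for: {instruction[:40]}...]"

def refine_prompt(initial_prompt: str, answer_questions, max_rounds: int = 3) -> str:
    """Meta-prompting loop: ask for clarifying questions, fold the answers
    back into the prompt, and only execute once the user is satisfied.

    `answer_questions` is a callable that takes the model's questions and
    returns the user's answers; returning an empty string ends refinement.
    """
    prompt = initial_prompt
    for _ in range(max_rounds):
        # Step 1: ask for clarifying questions instead of executing.
        questions = ask_model(
            "List clarifying questions that would help you produce a better "
            "result for this task. Do not attempt the task yet.",
            prompt,
        )
        answers = answer_questions(questions)
        if not answers:  # user is satisfied; stop refining
            break
        # Step 2: have the model rewrite the prompt with the new context.
        prompt = ask_model(
            "Rewrite the task prompt below to incorporate the answers.",
            f"TASK:\n{prompt}\n\nQ&A:\n{answers}",
        )
    # Step 3: execute only the refined prompt.
    return ask_model("Now execute this task.", prompt)
```

The key design choice is that execution happens exactly once, at the end; every earlier round-trip spends tokens on surfacing missing context rather than on generations the user will discard.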
PATTERN
“Users inherited search behavior—vague queries expecting useful outputs—and LLMs confidently fill ambiguity gaps with hallucinated assumptions. Have AI generate clarifying questions before execution; the questions reveal what users actually need before wasting a generation on what they thought they wanted.”
✓ WORKS WHEN
- Users are domain experts who have implicit context they don't realize they're omitting
- Tasks are complex enough that the first attempt rarely succeeds (multi-step analysis, content generation with specific constraints)
- Organization can invest in changing user behavior through training and practice
- AI tool supports conversational refinement without losing context
✗ FAILS WHEN
- Tasks are simple lookups or single-step operations where iteration adds friction without value
- Users lack the domain knowledge to answer clarifying questions meaningfully
- Latency requirements don't allow for multiple round-trips before execution
- The AI's clarifying questions are generic rather than domain-specific