The Sunk-Cost Design Trap
TRIGGER
Design exploration phases get compressed because generating multiple directions is time-expensive, so teams converge prematurely on a single approach without fully exploring the solution space. PMs need to form opinions on ill-defined problem spaces but don't know where to start.
APPROACH
Pendo's Senior Staff Product Designer Brian Greenbaum used Figma Make to explore a conversational UI for their Agent Mode feature, prototyping how reasoning messages and tool-call responses would stream in. His inputs were exploratory prompts with parameter variations (timing, density, layout); the outputs were multiple interactive prototypes with toggleable variations that externalized his thinking and revealed scalability constraints before significant investment. He added a drop-down menu to switch between variations: minimal vs. detailed views, single-line vs. stacked text, and different loading indicators. ServiceNow's Guy Meyer validated AI agent dashboard concepts within five minutes, discovering that an AI-agents matrix wouldn't scale to thousands of systems and identifying where custom solutions were needed. LinkedIn's Giuliano Manno, Director of Design Systems, describes using Figma Make to stretch the divergent vertices of the Double Diamond: generating artifacts in five minutes that help ideas click for engineers and product managers.
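The parameter-variation approach above can be sketched as a small enumeration: each toggleable axis (detail level, text layout, loading indicator) becomes a dimension, and the cross product yields one exploratory prompt per direction. The axis names and prompt wording here are illustrative assumptions, not Pendo's actual setup or Figma Make's API.

```python
from itertools import product

# Hypothetical parameter axes for a streaming-message UI exploration.
# These mirror the toggles described above (minimal vs. detailed,
# single-line vs. stacked, different loading indicators).
PARAMS = {
    "detail": ["minimal", "detailed"],
    "text_layout": ["single-line", "stacked"],
    "loading_indicator": ["spinner", "shimmer", "typing-dots"],
}

def variation_matrix(params):
    """Enumerate every combination of parameter values as a dict."""
    keys = list(params)
    return [dict(zip(keys, combo)) for combo in product(*params.values())]

def to_prompt(variation):
    """Render one combination as an exploratory prompt fragment."""
    settings = ", ".join(f"{k}={v}" for k, v in variation.items())
    return f"Generate the streaming reasoning view with {settings}."

variations = variation_matrix(PARAMS)
print(len(variations))  # 2 * 2 * 3 = 12 directions to toggle between
print(to_prompt(variations[0]))
```

The point of the enumeration is the trap-breaking economics: twelve directions cost one prompt template and a loop, so no single direction accumulates enough sunk effort to defend.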
PATTERN
“You'll abandon a bad idea in 5 minutes but defend one that took 3 days—AI breaks this sunk-cost trap. When generating a direction takes minutes instead of days, teams explore more broadly and abandon bad ideas faster. The blank canvas is the enemy; AI-generated options become thinking scaffolds that reveal your own constraints and preferences.”
✓ WORKS WHEN
- Problem space has multiple valid solutions worth exploring, or has 'too many unknowns' to pick one upfront
- Team tends to converge too early due to time pressure
- Variations can be expressed as parameter changes (timing, density, layout)
- Feedback loops with stakeholders are short enough to test multiple directions
- Design system provides enough constraints for useful variation generation
- PM needs to form opinion quickly to participate in already-moving discussions
- Low-fidelity exploration is acceptable (not yet at refinement stage)
✗ FAILS WHEN
- Problem has a clearly optimal solution that exploration won't improve
- Variations require deep domain expertise the AI lacks
- Stakeholders can't evaluate multiple options and need a single recommendation
- Generated variations are superficially different but structurally identical
- Time spent prompting and reviewing variations exceeds manual design time
- Problem is well-understood and team needs execution, not more exploration
- Stakeholders interpret any artifact as a proposal, creating premature commitment pressure