What Linear Learned About AI Transparency and User Trust
TRIGGER
Frontier models running multi-step reasoning take seconds rather than the sub-200ms users expect from Linear's deterministic systems. Without visibility into what's happening, users perceive the system as frozen or broken, undermining trust in the results.
APPROACH
Linear designed a dedicated Triage Intelligence module that displays the model's 'thinking state' alongside an elapsed-time counter. Input: multi-step AI reasoning operation. Output: visible thinking state with elapsed time and reasoning trace. Hovering over a suggestion reveals its reasoning in plain language, along with the alternatives considered. A full thinking panel shows the complete trace: context pulled in, decisions made, and how user guidance shaped outcomes. The team explicitly distinguished AI suggestions from human-set metadata visually, so users always know the source.
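The approach above can be sketched as a small data model: a tracker that records each reasoning step with its elapsed time, and suggestion values tagged with their source so AI output is never confused with human-set metadata. This is a hypothetical illustration, not Linear's implementation; all names (`ThinkingState`, `Suggestion`, the example steps) are assumptions.

```typescript
// Hypothetical sketch of a thinking-state tracker. Not Linear's actual code:
// type and field names are invented for illustration.

type Source = "ai" | "human";

interface ReasoningStep {
  label: string;     // plain-language description shown in the thinking panel
  elapsedMs: number; // time since the operation started
}

interface Suggestion<T> {
  value: T;
  source: Source;    // visually distinguishes AI output from human metadata
  reasoning: string; // shown on hover, in plain language
  alternatives: T[]; // other options the model considered
}

class ThinkingState {
  private startedAt: number;
  readonly steps: ReasoningStep[] = [];

  constructor(now: number = Date.now()) {
    this.startedAt = now;
  }

  // Record one step of the trace: context pulled in, a decision made, etc.
  addStep(label: string, now: number = Date.now()): void {
    this.steps.push({ label, elapsedMs: now - this.startedAt });
  }

  // Drives the visible elapsed-time counter.
  elapsedMs(now: number = Date.now()): number {
    return now - this.startedAt;
  }
}

// Usage: narrate a triage operation as it runs (timestamps are illustrative).
const state = new ThinkingState(0);
state.addStep("Pulled in 12 related issues for context", 800);
state.addStep("Matched report against a known regression", 2100);

const priority: Suggestion<string> = {
  value: "Urgent",
  source: "ai",
  reasoning: "Report matches a regression affecting paying customers",
  alternatives: ["High", "Medium"],
};
```

The key design choice is that `source` travels with every suggested value, so the UI can render AI suggestions and human-set metadata differently at any point, not just at the moment of suggestion.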
PATTERN
“Users will wait 30 seconds for AI they can see working, but abandon after 5 seconds of a spinner. When reasoning takes seconds, the UX problem isn't speed—it's uncertainty. A visible thinking state with elapsed time transforms waiting from anxiety into anticipation, converting latency from a cost into a transparency feature.”
✓ WORKS WHEN
- AI operations take 2-30 seconds—long enough to notice, short enough to wait
- Users will act on the AI output and need to trust it (triage, approval, routing decisions)
- The AI performs multi-step reasoning that can be meaningfully narrated (not opaque neural inference)
- Speed-focused product culture where any slowness requires explicit justification
- Users have the option to intervene or provide guidance that affects the outcome
✗ FAILS WHEN
- Operations complete in under 500ms—adding thinking UI creates unnecessary cognitive load
- AI runs in background/async where users don't wait for results
- Reasoning steps are not human-interpretable or would confuse rather than build trust
- Users are performing rapid, repetitive actions where any interstitial slows flow
- The AI is advisory-only and users don't need to validate its reasoning
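The WORKS WHEN / FAILS WHEN conditions above reduce to a simple gating rule. The sketch below uses the card's own thresholds (500 ms, 30 s) as assumed cutoffs; `pickTreatment` is a hypothetical name, not an API from any product.

```typescript
// Sketch of the latency-gating rule implied by the conditions above.
// Thresholds (500 ms, 30 s) come from the card; the exact cutoffs in a
// real product would be tuned, not hardcoded like this.

type Treatment = "none" | "thinking-state" | "background";

function pickTreatment(expectedMs: number, userWaits: boolean): Treatment {
  if (!userWaits) return "background";               // async: no one is watching
  if (expectedMs < 500) return "none";               // fast: thinking UI is noise
  if (expectedMs <= 30_000) return "thinking-state"; // narratable wait
  return "background";                               // too long to hold attention
}
```

Usage: a 5-second triage run with the user waiting gets the thinking state (`pickTreatment(5000, true)` returns `"thinking-state"`), while a 100 ms completion or a background job gets no interstitial at all.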