
Linear's Reasoning Trace Pattern for Agent Trust

TRIGGER

Users new to coding agents can't tell whether an agent reasoned carefully or simply got lucky, so they don't know when to accept its output and when to review it closely.

APPROACH

Cursor exposed agent 'thoughts' and 'tools' through Linear's API, surfacing the agent's internal reasoning state as it progresses. Linear designed the API with explicit support for this transparency (the 'thoughts' and 'tools' fields), which mapped directly onto Cursor's agent model, so users see the decision-making process rather than only the final output. Users can also open Cursor directly to explore the code, inspect what the agent did, and give feedback on the approach.
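To make the idea concrete, here is a minimal TypeScript sketch of a reasoning trace built from 'thought' and 'tool' entries. The type names, fields, and rendering are illustrative assumptions for this pattern, not Linear's or Cursor's actual API schema.

```typescript
// Illustrative model of an agent reasoning trace: a stream of 'thought'
// and 'tool' entries the user can inspect alongside the final output.
// All names here are hypothetical, not Linear's real schema.

type AgentActivity =
  | { kind: "thought"; body: string }
  | { kind: "tool"; name: string; input: string; output?: string };

class ReasoningTrace {
  private activities: AgentActivity[] = [];

  thought(body: string): void {
    this.activities.push({ kind: "thought", body });
  }

  tool(name: string, input: string, output?: string): void {
    this.activities.push({ kind: "tool", name, input, output });
  }

  // Render the trace the way a user might scan it to judge how the
  // agent decided, not just what it concluded.
  render(): string {
    return this.activities
      .map((a) =>
        a.kind === "thought"
          ? `[thought] ${a.body}`
          : `[tool] ${a.name}(${a.input})${a.output ? ` -> ${a.output}` : ""}`
      )
      .join("\n");
  }
}

// Example: a short trace for a bug-fix task.
const trace = new ReasoningTrace();
trace.thought("Failing test points at date parsing; check the parser first.");
trace.tool("grep", "parseDate src/", "src/dates.ts:42");
trace.thought("Bug is a timezone offset; patch parseDate, not the test.");
console.log(trace.render());
```

The point of the sketch is the shape of the data: interleaving thoughts with tool calls is what lets a reviewer spot a shortcut (e.g. a tool call whose output the agent never acted on) that a final diff alone would hide.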

PATTERN

Black-box outputs leave users guessing whether to accept or scrutinize each result. Exposing the reasoning trace—not just a confidence score—lets users calibrate trust by seeing how the agent decided, not just what it concluded.

WORKS WHEN

  • Users will actually look at intermediate state (developers reviewing code changes, not batch processing)
  • Agent reasoning is interpretable enough that exposed state provides signal (not just token-level attention)
  • The system can surface internal state without overwhelming users with irrelevant detail
  • Users are building mental models of agent capabilities and need feedback to calibrate expectations

FAILS WHEN

  • Users want fire-and-forget automation and won't inspect process
  • Agent reasoning is too opaque or technical for target users to interpret
  • Exposing internal state creates security or IP concerns
  • Volume is too high for any human to review individual agent reasoning (thousands of daily tasks)

Stage

build

From

August 2025
