What Notion Learned About Why Coding Agents Outperform Knowledge-Work Agents
TRIGGER
AI coding agents achieved 30-40× productivity multipliers while agents for general knowledge work remained stuck in narrow use cases, despite using the same underlying models.
APPROACH
Analysis of why programming agents succeeded (developers orchestrating 3-4 agents simultaneously, queuing tasks before lunch or bed) where other knowledge-work agents struggled. The key differentiator: coding tools and context live in one place (IDE, repo, terminal), while general knowledge work is scattered across Slack threads, strategy docs, dashboards, and institutional memory in people's heads. Humans currently serve as the glue, stitching context together with copy-paste and tab-switching.
PATTERN
“Same models, same prompts, wildly different results. Coding agents succeed because code lives in one place. Knowledge work fails because context is scattered across Slack, docs, and people's heads. Consolidate context first, deploy agents second.”
✓ WORKS WHEN
- Domain has natural context consolidation points (IDE for code, unified workspace for documents)
- Relevant information can be accessed programmatically without human mediation
- Institutional knowledge is documented rather than existing only in people's heads
- Integration APIs exist for the tools where work context lives
- Work artifacts are machine-readable (text, structured data) rather than requiring interpretation
✗ FAILS WHEN
- Critical context lives in undocumented tribal knowledge or depends on relationship context
- Work requires real-time information from systems without API access
- Context is spread across 10+ tools with no integration layer
- Key decisions depend on reading between the lines of informal communication
- Domain knowledge is tacit and hasn't been externalized into queryable form
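The "consolidate context first, deploy agents second" pattern can be sketched in code. This is a minimal illustration, not any real Notion implementation: the source names (Slack thread, strategy doc, dashboard) come from the APPROACH section above, but every function, class, and string here is hypothetical, and the agent call is a stub standing in for a real model invocation.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical context source; in practice `fetch` would call a real
# integration API (Slack, a docs workspace, a metrics dashboard).
@dataclass
class ContextSource:
    name: str
    fetch: Callable[[str], str]  # task -> relevant text snippet

def consolidate_context(sources: list[ContextSource], task: str) -> str:
    """Gather scattered context into one machine-readable blob
    before any agent sees the task -- the 'consolidate first' step
    that replaces the human copy-paste-and-tab-switch glue."""
    sections = []
    for src in sources:
        snippet = src.fetch(task)
        if snippet:
            sections.append(f"## {src.name}\n{snippet}")
    return "\n\n".join(sections)

def run_agent(task: str, context: str) -> str:
    """Stub for an agent call; it only reports what it would receive,
    since no real model is wired in here."""
    return f"TASK: {task}\nCONTEXT SOURCES: {context.count('## ')}"

# Illustrative sources mirroring the scattered tools named above.
sources = [
    ContextSource("Slack thread", lambda t: "Decision: ship the beta Friday."),
    ContextSource("Strategy doc", lambda t: "Q3 goal: reduce churn 5%."),
    ContextSource("Dashboard", lambda t: "Churn last week: 6.2%."),
]
task = "draft launch announcement"
context = consolidate_context(sources, task)
print(run_agent(task, context))
```

The design point is ordering: the agent only ever sees one consolidated artifact, so it behaves like a coding agent working inside a single repo rather than an assistant that must be hand-fed fragments.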