The Reviewer Ignorance Advantage
TRIGGER
Single AI coding agents accumulate context bias during implementation: they become invested in their own approach and miss issues that fresh eyes would catch, much as human developers benefit from code review by colleagues who were not involved in writing the code.
APPROACH
Anthropic engineers use separate Claude instances with isolated contexts for writing versus reviewing code. Input: code written by Claude #1 in one context. Output: review feedback from Claude #2 in a fresh context, then edits from Claude #3, which sees both. Workflow:
1. Have Claude #1 write the code.
2. Run /clear or start Claude #2 in another terminal.
3. Have Claude #2 review Claude #1's work without seeing the implementation journey.
4. Start Claude #3 with access to both the code and the review feedback.
5. Have Claude #3 edit the code based on that feedback.
For test-driven workflows, one Claude writes the tests and another writes code to pass them. Instances can communicate via separate working scratchpads: each instance is told which file to write to and which to read from.
PATTERN
“The same Claude instance that wrote code will defend its flawed approach rather than catch obvious issues—accumulated context creates bias toward sunk-cost solutions. For verification tasks, the reviewer's ignorance of implementation decisions is a feature: use separate instances with isolated contexts.”
✓ WORKS WHEN
- Task has clear verification criteria (tests pass, review checklist, visual match)
- Implementation involves judgment calls where a second opinion adds value
- You can afford 2-3x the token cost for multi-instance workflows
- Tasks are complex enough that accumulated context could bias toward sunk-cost solutions
✗ FAILS WHEN
- Simple one-shot tasks where review overhead exceeds implementation time
- Implementation and verification are tightly coupled and require shared context (e.g., debugging sessions)
- Review criteria are subjective and inconsistent between instances
- Latency constraints prevent sequential write-then-review workflows