How Notion Treats Agent Instructions as Data
TRIGGER
AI agents need to follow different investigation procedures depending on alert type, but encoding every procedure into a single prompt creates a maintenance burden and locks non-engineers out of updating workflows.
APPROACH
Notion's DART team structured runbooks as pages in a database, with each page title matching an alert type. Input: an alert with a type classification. Output: an investigation that follows that runbook's detailed instructions. When an alert fires, Scruff looks up the runbook whose title matches the alert type and follows its specific instructions (e.g., which logs to search, which misconfigurations to check). The team can update runbooks as normal Notion documents without touching agent code or prompts, enabling 'easy updates without external dependencies.'
PATTERN
“Your domain experts can update agent behavior without filing engineering tickets—if you treat instructions as data. When procedures live in a queryable database indexed by task type, anyone who can edit a doc can modify agent behavior. The agent becomes a procedure executor rather than a procedure knower.”
✓ WORKS WHEN
- Tasks can be cleanly categorized into types that map to distinct procedures
- Domain experts (not just engineers) need to update procedures frequently
- Procedures are detailed enough that lookup overhead is justified (multi-step workflows)
- Alert/task taxonomy is stable enough that runbook titles reliably match incoming work
- Organization already maintains runbooks or SOPs that can be migrated
✗ FAILS WHEN
- Tasks don't fit clean categories or require blending multiple procedures
- Procedures change faster than the database can be updated (real-time adaptation needed)
- Runbook instructions require interpretation rather than literal execution
- Number of task types exceeds what's practical to maintain as separate documents (100+ types)
- Procedures involve judgment calls that can't be written as explicit steps
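The first failure mode above can be softened with an explicit fallback: when no runbook title matches, route to a generic triage procedure and record the gap so the taxonomy can be extended. A sketch under hypothetical names (`RUNBOOKS`, `GENERIC_TRIAGE`, `route` are illustrative, not part of Notion's system):

```python
# Hypothetical title-keyed runbook index; in practice a database query.
RUNBOOKS = {
    "suspicious-login": "Check auth logs, verify MFA, page on-call if unverified.",
}
GENERIC_TRIAGE = "Gather alert context, summarize evidence, and escalate to a human."

unmatched: list[str] = []  # gaps to report back to the runbook maintainers

def route(alert_type: str) -> str:
    """Return the matching procedure, or fall back to triage and log the gap."""
    procedure = RUNBOOKS.get(alert_type)
    if procedure is None:
        unmatched.append(alert_type)  # signals the taxonomy needs a new runbook
        return GENERIC_TRIAGE
    return procedure
```

Logging unmatched types turns the failure mode into a feedback loop: maintainers see exactly which runbooks are missing.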