The Filter-and-Regenerate Loop
TRIGGER
AI-generated content needs human review before user-facing deployment, but the volume (millions of items) makes comprehensive review impossible. Releasing unreviewed AI content, however, risks inappropriate or off-brand output.
APPROACH
Canva generated over 1 million personalized poems across 9 locales. Input: generated poems + sensitivity wordlist + tone classification prompt. Output: approved poem set with flagged items regenerated. Review process: (1) the localization team reviewed samples of non-English poems to fine-tune prompts, (2) automated flagging of poems containing potentially sensitive words, (3) a generative-AI classifier to flag poems with negative tone. Flagged poems were regenerated and re-run through the filters in cycles until an acceptable alternative existed for each.
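The loop above can be sketched as follows. This is a minimal illustration, not Canva's implementation: the wordlist, the `flag` filter, and the `regenerate` stub are all placeholders for the real sensitivity list, tone classifier, and generation-model call.

```python
import re

# Stand-in for the real sensitivity wordlist (assumption, for illustration)
SENSITIVE_WORDS = {"war", "death", "hate"}

def flag(poem: str) -> bool:
    """Flag a poem if it contains any sensitive word (whole-word match)."""
    words = set(re.findall(r"[a-z']+", poem.lower()))
    return bool(words & SENSITIVE_WORDS)

def regenerate(poem_id: int, attempt: int) -> str:
    """Stand-in for a call to the generation model."""
    return f"poem {poem_id}, attempt {attempt}: sunshine and joy"

def filter_and_regenerate(poems: dict[int, str], max_cycles: int = 5) -> dict[int, str]:
    """Approve unflagged poems; regenerate flagged ones and re-filter, up to max_cycles."""
    approved = {pid: p for pid, p in poems.items() if not flag(p)}
    pending = {pid: p for pid, p in poems.items() if pid not in approved}
    for cycle in range(1, max_cycles + 1):
        if not pending:
            break
        regenerated = {pid: regenerate(pid, cycle) for pid in pending}
        pending = {}
        for pid, poem in regenerated.items():
            if flag(poem):
                pending[pid] = poem   # still flagged: try again next cycle
            else:
                approved[pid] = poem  # passed the filter
    # anything still pending after max_cycles falls back to human review
    return approved
```

The key design point is that only items that fail the filter re-enter the loop, so each cycle operates on a shrinking set; the `max_cycles` cap bounds cost and routes stubborn items to humans.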
PATTERN
“Regeneration is cheap: instead of fixing flagged content by hand or reviewing everything manually, automatically regenerate flagged items and re-run the filter. Each regeneration cycle shrinks your review surface exponentially.”
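The exponential claim follows from simple arithmetic: if a fraction p of items fails the filter independently on each pass, roughly N·p^k items remain flagged after k cycles. The figures below (1M items, 5% flag rate) are illustrative assumptions, not reported numbers.

```python
def remaining_flagged(n_items: int, flag_rate: float, cycles: int) -> float:
    """Expected number of items still flagged after `cycles` passes,
    assuming a constant, independent flag rate per regeneration."""
    return n_items * flag_rate ** cycles

# Assumed figures for illustration: 1M poems, 5% flag rate per pass
print(round(remaining_flagged(1_000_000, 0.05, 1)))  # need regeneration after pass 1
print(round(remaining_flagged(1_000_000, 0.05, 3)))  # roughly 125 left after 3 cycles
```

Three cheap cycles take a million-item review surface down to a handful, which is why regeneration beats manual triage whenever the independence assumption roughly holds.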
✓ WORKS WHEN
- Regeneration cost is low relative to manual review cost
- Content quality issues are detectable via automated signals (sentiment, keyword lists, pattern matching)
- Output variability means regeneration will likely produce different (better) results
- Strict quality requirements mean even low-probability failures are unacceptable
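One practical note on the "keyword lists, pattern matching" signal: naive substring matching over a wordlist produces false positives (the classic Scunthorpe problem, e.g. a three-letter slur inside "classic"). A hedged sketch of whole-word matching, with a hypothetical `contains_sensitive` helper:

```python
import re

def contains_sensitive(text: str, wordlist: set[str]) -> bool:
    """True if text contains any wordlist entry as a whole word.
    \\b word boundaries avoid substring false positives
    (a naive `"ass" in text` check would flag "classic")."""
    pattern = re.compile(
        r"\b(" + "|".join(map(re.escape, wordlist)) + r")\b",
        re.IGNORECASE,
    )
    return bool(pattern.search(text))
```

Whole-word matching keeps the automated signal precise enough that flagged items are worth regenerating rather than drowning the loop in false positives.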
✗ FAILS WHEN
- Regeneration is expensive (complex prompts, long outputs, rate limits)
- Quality issues are subtle and require human judgment to detect
- Output is deterministic (e.g., temperature 0), so regeneration reproduces the same problematic content
- Volume is low enough that comprehensive human review is feasible