Atlassian's Component Calibration Loop

TRIGGER

Screenshot-to-code conversion was generating incorrect components because the AI misidentified custom design system elements—labeling a proprietary button variant as a generic link, or confusing internal icon sets with standard libraries.

APPROACH

Atlassian's team added a calibration step before using AI for screenshot-to-code: they feed individual design elements to the AI and ask it what it sees. If the AI correctly identifies a component (e.g., "primary button with icon"), they proceed. If it misidentifies one (e.g., calls their custom toggle a checkbox), they correct the label and adjust prompts or retraining data. This creates a feedback loop that improves recognition accuracy over time. Input: an individual design system component image. Output: the AI's identification label, validated and corrected by designers.
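The loop described above can be sketched in code. This is a minimal illustration, not Atlassian's implementation: the `CalibrationLoop` class and its method names are hypothetical, and a real setup would call a vision model where this sketch takes the model's label as a plain string.

```python
from dataclasses import dataclass, field

@dataclass
class CalibrationLoop:
    """Accumulates designer corrections to AI component identifications."""
    corrections: dict = field(default_factory=dict)  # wrong label -> right label

    def calibrate(self, ai_label: str, expected_label: str) -> str:
        """Compare the model's guess to the design system's ground truth.
        Record a correction when they disagree; return the validated label."""
        if ai_label != expected_label:
            self.corrections[ai_label] = expected_label
        return expected_label

    def apply(self, ai_label: str) -> str:
        """Remap a raw model label through accumulated corrections."""
        return self.corrections.get(ai_label, ai_label)

    def correction_prompt(self) -> str:
        """Render corrections as prompt context for later conversion runs."""
        lines = [f"- What looks like a '{wrong}' is our '{right}'"
                 for wrong, right in self.corrections.items()]
        return "Design system notes:\n" + "\n".join(lines)

# Calibration pass: designer validates the model's guesses per component.
loop = CalibrationLoop()
loop.calibrate(ai_label="checkbox", expected_label="custom toggle")
loop.calibrate(ai_label="primary button with icon",
               expected_label="primary button with icon")  # correct: no entry

# Later screenshot-to-code runs remap known misclassifications
# and inject the correction notes into the prompt.
assert loop.apply("checkbox") == "custom toggle"
print(loop.correction_prompt())
```

The key design point is persistence: corrections gathered during calibration are reused on every subsequent run, either as a label remap or as prompt context, so each misidentification only has to be fixed once.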

PATTERN

Your custom toggle becomes a checkbox, your proprietary button a generic link: training-data assumptions create systematic misclassifications that stay invisible until you probe for them. Feed design elements to the AI and ask what it sees before building any downstream logic on its answers.

WORKS WHEN

  • Design system contains custom components that visually differ from standard UI libraries
  • Screenshot-to-code is a regular workflow step, not occasional experimentation
  • Team can invest upfront calibration time (hours) to avoid repeated correction time (minutes per instance, but frequent)
  • AI tool allows correction feedback that persists across sessions

FAILS WHEN

  • Design system uses standard, well-documented component libraries that AI already recognizes accurately
  • Screenshots are low-fidelity wireframes where component identity is ambiguous even to humans
  • One-time conversion tasks where calibration overhead exceeds manual review time
  • AI tool doesn't support persistent correction or fine-tuning

Stage

build

From

January 2026
