Data Sync Patterns

Use these patterns to move deals, CRM records, and related objects into your own warehouse or application safely.

Updated March 2026

Sync Modes

Backfill Pattern

  1. Start with the smallest set of fields that proves the pipeline works.
  2. Use cursor pagination where available.
  3. Persist both the downstream write result and the last successful cursor checkpoint.
  4. Expand the field set only after the happy path is stable.
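The steps above can be sketched as follows. This is a minimal sketch, not a real client: `fetch_page` stands in for a cursor-paginated endpoint, `FAKE_RECORDS` is illustrative data, and the dict checkpoint stands in for a durable store (a file or table in production). Note the ordering in step 3: the downstream write happens before the cursor advances.

```python
# Stand-in data and cursor-paginated "endpoint" (hypothetical; swap in
# your real API client in production).
FAKE_RECORDS = [{"id": i, "name": f"deal-{i}"} for i in range(1, 8)]

def fetch_page(cursor, limit=3):
    """Return (records, next_cursor); cursor is the last ID already seen."""
    start = 0 if cursor is None else 1 + next(
        i for i, r in enumerate(FAKE_RECORDS) if r["id"] == cursor)
    page = FAKE_RECORDS[start:start + limit]
    more = start + limit < len(FAKE_RECORDS)
    return page, (page[-1]["id"] if page and more else None)

def backfill(destination, checkpoint):
    """Resume from the last checkpoint; advance it only after the write succeeds."""
    cursor = checkpoint.get("cursor")
    while True:
        page, next_cursor = fetch_page(cursor)
        destination.extend(page)                 # persist the write result first...
        checkpoint["cursor"] = cursor = next_cursor  # ...then the cursor checkpoint
        if next_cursor is None:
            return destination
```

Because the checkpoint only moves after a successful write, restarting `backfill` with the saved checkpoint resumes from the last completed page instead of re-reading the entire dataset.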

Incremental Pattern

Use an updated_at or similar time-based filter together with a persisted checkpoint:

  • Store the timestamp of the last fully successful run.
  • Re-read a small overlap window to tolerate clock skew and delayed writes.
  • Deduplicate in your destination using resource IDs.
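A compact sketch of those three bullets, under stated assumptions: `fetch_since` stands in for a client call that filters on `updated_at`, the dict keyed by ID stands in for your destination, and the five-minute overlap window is an illustrative default, not a recommendation.

```python
from datetime import datetime, timedelta, timezone

OVERLAP = timedelta(minutes=5)  # overlap window for clock skew / delayed writes
EPOCH = datetime.min.replace(tzinfo=timezone.utc)

def incremental_sync(fetch_since, destination_by_id, state):
    """fetch_since(ts) stands in for a client call returning records
    with updated_at >= ts (hypothetical signature)."""
    since = state.get("watermark")
    cutoff = (since - OVERLAP) if since else EPOCH  # re-read a small overlap
    run_started = datetime.now(timezone.utc)
    for rec in fetch_since(cutoff):
        destination_by_id[rec["id"]] = rec  # upsert keyed on resource ID = dedup
    state["watermark"] = run_started        # persist only after a fully successful run
```

The overlap window means some records are fetched twice across runs; the ID-keyed upsert makes that harmless, which is why the two bullets work as a pair.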
Do not treat offset pagination as a sync primitive

Offset pagination is appropriate for sorted browsing, not for durable bulk syncs. For production data movement, prefer cursor pagination whenever the endpoint supports it.
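The failure mode is easy to reproduce. In this toy simulation (hypothetical data, not a real API), a record is deleted upstream between two offset-paginated requests, so a later record silently shifts out of the window; a cursor keyed on the last ID seen does not drift.

```python
records = [{"id": i} for i in range(1, 7)]  # ids 1..6

def offset_page(rows, offset, limit):
    return rows[offset:offset + limit]

page1 = offset_page(records, 0, 3)               # ids 1, 2, 3
records = [r for r in records if r["id"] != 2]   # id 2 deleted upstream mid-sync
page2 = offset_page(records, 3, 3)               # ids 5, 6 -- id 4 was skipped

seen = [r["id"] for r in page1 + page2]
assert 4 not in seen  # record 4 silently missed by the offset sync

def cursor_page(rows, after_id, limit):
    """Cursor keyed on the last ID actually seen: position cannot drift."""
    return [r for r in rows if r["id"] > after_id][:limit]

assert [r["id"] for r in cursor_page(records, after_id=3, limit=3)] == [4, 5, 6]
```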

Operational Checklist

  • Monitor request_id values for failed batches.
  • Alert on repeated 401, 403, and 429 responses.
  • Keep writes idempotent in your destination so replaying a batch is safe.
  • Version your destination schema deliberately as Lev fields expand.
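The idempotent-writes item in the checklist can be sketched with a SQLite upsert; the `deals` table and columns are illustrative, not a prescribed schema. Replaying the same batch after a failed or retried run leaves the destination unchanged.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE deals (id INTEGER PRIMARY KEY, name TEXT, updated_at TEXT)")

def write_batch(rows):
    """Upsert keyed on id: inserting an existing row updates it in place."""
    conn.executemany(
        """INSERT INTO deals (id, name, updated_at)
           VALUES (:id, :name, :updated_at)
           ON CONFLICT(id) DO UPDATE SET
               name = excluded.name,
               updated_at = excluded.updated_at""",
        rows)
    conn.commit()

batch = [{"id": 1, "name": "Acme", "updated_at": "2026-03-01"}]
write_batch(batch)
write_batch(batch)  # replaying the batch is safe: still exactly one row
```

The same pattern applies to most warehouses via their native `MERGE` or upsert statements; the point is that the resource ID, not arrival order, decides what a write means.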