Ship data syncs that survive production

Last updated: March 2026

The safest Lev syncs treat pagination, ordering, and checkpoints as product requirements. Use this guide when you are moving deals, placements, contacts, or companies into your own system.

Sync Modes

Choose the sync mode deliberately

Backfill
Use cursor pagination to walk the full dataset deterministically when you are loading historical records.
Incremental
Use filters like updated_at together with stored checkpoints to pull only what changed since the last successful run.
On-demand refresh
Use resource-specific reads when a user opens a detail view and needs the freshest known state.
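An on-demand refresh can be sketched as a single keyed read. The by-ID path below is an assumption for illustration (only the collection endpoint appears later in this guide); check your Lev API reference for the exact route.

```typescript
// Build the single-resource URL. The /deals/{id} path is an assumed
// route for illustration, not a confirmed Lev endpoint.
function dealUrl(dealId: string): string {
  return `https://api.levcapital.com/api/external/v2/deals/${encodeURIComponent(dealId)}`
}

// Fetch the freshest known state for one deal when a user opens
// its detail view.
async function refreshDeal(dealId: string): Promise<unknown> {
  const response = await fetch(dealUrl(dealId), {
    headers: {
      Authorization: "Bearer YOUR_API_KEY",
      "X-Origin-App": "warehouse-sync",
    },
  })
  if (!response.ok) throw new Error(`Refresh failed: ${response.status}`)
  return response.json()
}
```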

Backfill Pattern

  1. Start with the smallest set of fields that proves the pipeline works.
  2. Use cursor pagination where available.
  3. Persist both the downstream write result and the last successful cursor checkpoint.
  4. Expand the field set only after the happy path is stable.

Backfill pseudocode

```typescript
let cursor: string | null = null

while (true) {
  const url = new URL("https://api.levcapital.com/api/external/v2/deals")
  url.searchParams.set("limit", "100")
  if (cursor) url.searchParams.set("cursor", cursor)

  const response = await fetch(url, {
    headers: {
      Authorization: "Bearer YOUR_API_KEY",
      "X-Origin-App": "warehouse-sync",
    },
  })
  // Stop instead of checkpointing past a failed page.
  if (!response.ok) throw new Error(`Backfill failed: ${response.status}`)

  const payload = await response.json()
  // Checkpoint the cursor only after this batch is durably written.
  await writeBatch(payload.data)

  if (!payload.pagination?.has_more) break
  cursor = payload.pagination.next_cursor
}
```

Incremental Pattern

Use an updated_at or similar time-based filter together with a persisted checkpoint:
  • Store the timestamp of the last fully successful run.
  • Re-read a small overlap window to tolerate clock skew and delayed writes.
  • Deduplicate in your destination using resource IDs.
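The overlap and deduplication steps above can be sketched as two small pure helpers. The names `sinceWithOverlap` and `dedupeById` are illustrative, and the exact query-parameter name for the `updated_at` filter depends on the endpoint.

```typescript
const OVERLAP_MS = 5 * 60 * 1000 // re-read a 5-minute window to tolerate skew

// Shift the last successful run time back by the overlap window to
// produce the "since" bound for the next incremental pull.
function sinceWithOverlap(lastRunIso: string, overlapMs: number = OVERLAP_MS): string {
  return new Date(Date.parse(lastRunIso) - overlapMs).toISOString()
}

// Deduplicate a batch by resource ID, keeping the last occurrence,
// so records re-read inside the overlap window are harmless.
function dedupeById<T extends { id: string }>(records: T[]): T[] {
  const byId = new Map<string, T>()
  for (const record of records) byId.set(record.id, record)
  return [...byId.values()]
}
```

Records re-read inside the overlap window are safe precisely because the destination deduplicates on resource IDs and writes idempotently.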

Do not treat offset pagination as a sync primitive
Offset pagination is appropriate for sorted browsing, not for durable bulk syncs. For production data movement, prefer cursor pagination whenever the endpoint supports it.

Operational Checklist

  • Monitor request_id values for failed batches.
  • Alert on repeated 401, 403, and 429 responses.
  • Keep writes idempotent in your destination so replaying a batch is safe.
  • Version your destination schema deliberately as Lev fields expand.
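Idempotent writes are what make checkpoint replay safe. A minimal in-memory sketch of the idea follows; a real destination would use a keyed upsert (for example SQL `INSERT ... ON CONFLICT`), and `DealRow`/`IdempotentStore` are illustrative names.

```typescript
interface DealRow {
  id: string
  updated_at: string
}

// Keyed store: writing the same batch twice leaves the same state,
// so a failed run can be replayed from its last checkpoint safely.
class IdempotentStore {
  private rows = new Map<string, DealRow>()

  writeBatch(batch: DealRow[]): void {
    for (const row of batch) {
      const existing = this.rows.get(row.id)
      // Last-writer-wins on updated_at keeps replays order-insensitive.
      if (!existing || existing.updated_at <= row.updated_at) {
        this.rows.set(row.id, row)
      }
    }
  }

  size(): number {
    return this.rows.size
  }
}
```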

What's next

Once the sync path is stable, use the same data foundation for a higher-level AI workflow.

Continue to Build a Broker Copilot