Reliable cursor-based API pagination

Hi everyone :waving_hand:
I’m building an automation in n8n to sync data from a third-party API into our database. The API uses cursor-based pagination, returning up to 100 records per request and a next_cursor value if more data exists. There is no total record count, and cursors expire after 15 minutes.
The workflow runs nightly and may process anywhere from 5,000 to over 100,000 records. The API enforces rate limits and occasionally returns 429 and 500 errors, so retries and backoff are required.
The main challenge is reliability. If the workflow fails midway, it must resume exactly from the last successful cursor without duplicating or skipping records. Since n8n is running in queue mode with multiple workers, I also need to avoid concurrency issues where two executions process the same page.
I’ve tried using Split In Batches, static workflow data, and looping with IF nodes, but I keep encountering problems like:
• Infinite loops due to repeated cursors
• Duplicate records after retries
• Loss of cursor state on crashes
• Difficulty coordinating pagination safely across workers

Describe the problem/error/question

How would you design a robust n8n workflow that handles cursor-based pagination safely, respects rate limits, avoids duplicates, and can resume after failure especially when running in queue mode with multiple workers?

What is the error message (if any)?

Please share your workflow

(Select the nodes on your canvas and use the keyboard shortcuts CMD+C/CTRL+C and CMD+V/CTRL+V to copy and paste the workflow.)

Share the output returned by the last node

Information on your n8n setup

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

Hi @Keira_Becky

I can see you’re trying to reliably sync a cursor-based API.

Try this

  1. Use a DB-backed cursor and lock
  • Store your last successful cursor in a table (sync_state)

  • Use a lock table (sync_lock) to ensure only one worker paginates at a time

  2. Fetch pages sequentially
  • Read cursor → call API → write data → update cursor

  • Save the cursor after each page so you can resume after crashes

  • Keep the loop moving once a run starts: since cursors expire after 15 minutes, avoid long pauses between pages

  3. Write records idempotently

    INSERT INTO records(api_id, data, updated_at)
    VALUES (:api_id, :data::jsonb, NOW())
    ON CONFLICT (api_id) DO UPDATE SET
      data = EXCLUDED.data,
      updated_at = NOW();

  4. Handle rate limits

  • Retry 429 / 500 errors with exponential backoff

  • Do not advance the cursor until data is safely written

  5. Loop until no more pages
  • Only advance the cursor when a page has been successfully processed

  • Release the lock at the end
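For the backoff part of step 4, a minimal retry helper could look like the sketch below (plain TypeScript you could adapt for a Code node; `doRequest`, the retry count, and the delay values are all placeholders, not n8n APIs):

```typescript
// Retry an async request with exponential backoff on 429/5xx failures.
// `doRequest` is any async function that throws an Error carrying a
// numeric `status` property; maxRetries and baseMs are example values.
async function withBackoff<T>(
  doRequest: () => Promise<T>,
  maxRetries = 5,
  baseMs = 500
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await doRequest();
    } catch (err: any) {
      const retryable = err.status === 429 || err.status >= 500;
      if (!retryable || attempt >= maxRetries) throw err;
      // 500 ms, 1 s, 2 s, 4 s, ... plus jitter so workers don't sync up
      const delay = baseMs * 2 ** attempt + Math.random() * 100;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

The jitter matters in queue mode: without it, several workers hitting a 429 at the same moment would all retry at the same moment too.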

This setup avoids duplicates, resumes safely, and works in queue mode with multiple workers.
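To show how the five steps fit together, here is a rough end-to-end sketch; `db` and `api` are hypothetical helpers standing in for your Postgres and HTTP Request nodes, not real n8n objects:

```typescript
// End-to-end sketch: take the lock, resume from the stored cursor,
// process pages one at a time, advance the cursor only after writes,
// release the lock when finished. `db` and `api` are placeholders.
async function syncAll(db: any, api: any): Promise<string> {
  if (!(await db.tryAcquireLock('sync_lock'))) {
    return 'skipped: another worker holds the lock';
  }
  try {
    let cursor = await db.getCursor('sync_state'); // null on a fresh run
    while (true) {
      const page = await api.fetchPage(cursor);  // { records, next_cursor }
      await db.upsertRecords(page.records);      // idempotent ON CONFLICT write
      cursor = page.next_cursor;
      await db.saveCursor('sync_state', cursor); // persist only after writes
      if (!cursor) break;                        // no next_cursor: all done
    }
    return 'done';
  } finally {
    await db.releaseLock('sync_lock');
  }
}
```

The key ordering is write records first, save cursor second: a crash between the two just means the same page is re-fetched and upserted again, which the ON CONFLICT write absorbs.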


Hey, the DB-backed approach is the right call here. For the worker-concurrency issue specifically, you really want a proper row-level lock in Postgres before fetching each page, something like SELECT ... FOR UPDATE SKIP LOCKED on your cursor-state row. That way, if another worker somehow picks up the same job, it just skips instead of duplicating work. Also make sure you commit the new cursor only after the records are written, not before; otherwise a crash leaves you with a gap.
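To illustrate that pattern, a per-page transaction might look like the sketch below; the `client` object (a pg-style client with a `query()` method), the table names, and the column names are assumptions, not something n8n provides out of the box:

```typescript
// Per-page transaction: lock the cursor row, fetch + write the page,
// then advance the cursor and commit. If another worker holds the row,
// SKIP LOCKED returns no rows and we bail out instead of double-processing.
// `client` is a stand-in for a pg-style client with a query() method.
async function processOnePage(
  client: any,
  fetchPage: (c: string | null) => Promise<any>
): Promise<boolean> {
  await client.query('BEGIN');
  try {
    const { rows } = await client.query(
      `SELECT cursor FROM sync_state WHERE id = 1 FOR UPDATE SKIP LOCKED`
    );
    if (rows.length === 0) {          // another worker holds the row lock
      await client.query('ROLLBACK');
      return false;
    }
    const page = await fetchPage(rows[0].cursor);
    for (const rec of page.records) {
      await client.query(
        `INSERT INTO records(api_id, data) VALUES ($1, $2)
         ON CONFLICT (api_id) DO UPDATE SET data = EXCLUDED.data`,
        [rec.id, rec]
      );
    }
    // The cursor moves only after the records above are written,
    // and both commit atomically, so a crash can't leave a gap.
    await client.query(`UPDATE sync_state SET cursor = $1 WHERE id = 1`, [
      page.next_cursor,
    ]);
    await client.query('COMMIT');
    return true;
  } catch (err) {
    await client.query('ROLLBACK');
    throw err;
  }
}
```

Because the cursor update and the record inserts share one transaction, "records written but cursor not advanced" and "cursor advanced but records missing" both become impossible states.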

