Exactly-once processing failure in paginated API sync workflow

I am operating an n8n workflow that synchronizes data from a third-party API into my database. The workflow is triggered by a Cron node every 5 minutes, and my goal is to process each record exactly once, even when retries or failures occur.
The external API I’m integrating with has the following behavior:
• Cursor-based pagination using next_cursor
• No guaranteed ordering of records
• Duplicate records can appear across pages
• Occasional timeouts and HTTP 429 (rate-limit) errors

Describe the problem/error/question

After running this workflow for several days:
• Some records are missing
• Some records are processed more than once
• Failed executions resume with stale or incorrect cursors
• Retries sometimes re-process already handled data
• Overlapping Cron executions cause inconsistent results

What is the error message (if any)?

There is no single error message; I want to understand:

  1. Why exactly-once processing is failing, even though I’m using database upserts
  2. How retries and crashes are corrupting my cursor state
  3. Why Split In Batches is unsafe in my cursor-paginated workflow
  4. How overlapping Cron executions introduce race conditions
  5. How a partial failure mid-pagination leads to data loss or duplication

Please share your workflow

(Select the nodes on your canvas and use the keyboard shortcuts CMD+C/CTRL+C and CMD+V/CTRL+V to copy and paste the workflow.)

Share the output returned by the last node

Information on your n8n setup

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

Honestly this is a lot of questions, but the core issue is that exactly-once delivery doesn’t exist at the network level; you can only get it at the storage layer, with proper upserts on a stable unique key. For the overlapping cron runs, set your workflow timeout to match your cron interval, or use concurrency control. Split In Batches won’t preserve cursor state across failures, so you’re better off doing the pagination in a Code node, where you control the loop and persist the cursor yourself after each page.
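A minimal sketch of that pattern, with hypothetical stand-ins (`fetchPage`, `upsertRecord`, `saveCursor`, `loadCursor` are placeholders for your own HTTP Request and database steps, and `pages` fakes the API):

```javascript
// Sketch: cursor-paginated sync where the checkpoint is only advanced
// AFTER a page has been fully written with idempotent upserts.
// All names here are illustrative stand-ins, not n8n built-ins.
const pages = {
  start: { records: [{ id: 1 }, { id: 2 }], next_cursor: 'p2' },
  p2:    { records: [{ id: 2 }, { id: 3 }], next_cursor: null }, // id 2 repeats across pages
};

const db = new Map();     // stands in for the target table, keyed on the stable id
let checkpoint = 'start'; // stands in for the persisted cursor row

function fetchPage(cursor) { return pages[cursor]; }                      // HTTP Request stand-in
function upsertRecord(rec) { if (!db.has(rec.id)) db.set(rec.id, rec); }  // ON CONFLICT DO NOTHING
function saveCursor(c) { checkpoint = c; }
function loadCursor() { return checkpoint; }

function syncOnce() {
  let cursor = loadCursor();
  while (cursor) {
    const page = fetchPage(cursor);
    for (const rec of page.records) upsertRecord(rec); // idempotent write first…
    saveCursor(page.next_cursor);                      // …then advance the checkpoint
    cursor = page.next_cursor;
  }
}

syncOnce();
console.log([...db.keys()]); // each id stored once despite the cross-page duplicate
```

If the run crashes mid-page, the checkpoint still points at that page, so the retry refetches it and the upserts absorb the duplicates.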


Exactly-once isn’t really a thing you can get at the network level; you have to build it at the storage layer. The real fix is upserts keyed on a stable unique ID from the source data, plus a checkpoint table that persists your last successfully processed cursor position outside of n8n’s execution context. Also set N8N_CONCURRENCY_PRODUCTION_LIMIT=1 so your cron runs don’t overlap each other.
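For reference, that concurrency cap is set as an environment variable on the n8n instance itself (shown here for a self-hosted setup; adjust to however you pass env vars, e.g. your Docker config):

```shell
# Cap concurrent production executions at 1 so scheduled runs
# queue up instead of overlapping each other.
export N8N_CONCURRENCY_PRODUCTION_LIMIT=1
```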


Hi @Keira_Becky

Exactly-once processing fails because the cursor is stored in static workflow data, which is shared and non-transactional. Retries, crashes, and overlapping Cron runs advance the cursor incorrectly, causing duplicates and missing records. Split In Batches makes this unsafe.

Store cursor + lock in DB:

CREATE TABLE sync_state (
  source TEXT PRIMARY KEY,
  cursor TEXT,
  locked BOOLEAN DEFAULT false
);

Lock execution:

UPDATE sync_state
SET locked = true
WHERE source = 'items' AND locked = false;

Fetch one page:

GET /v1/items?cursor={{ $json.cursor }}

Idempotent write:

INSERT INTO items (id, updated_at)
VALUES (:id, :updated_at)
ON CONFLICT (id) DO NOTHING;

Advance cursor after success:

UPDATE sync_state
SET cursor = :next_cursor, locked = false
WHERE source = 'items';

This gives you safe retries, no overlapping runs, and no missing or duplicated data.
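The whole protocol can be simulated end to end in plain JavaScript (the SQL statements above appear as comments; `tryLock` mirrors the conditional UPDATE, and re-running a page after a crash is harmless because the cursor only advances after the idempotent writes):

```javascript
// Simulation of the sync_state protocol: conditional lock,
// idempotent insert, cursor advance only after a fully processed page.
const syncState = { source: 'items', cursor: 'c0', locked: false }; // the sync_state row
const items = new Map();                                            // the items table, keyed on id

// UPDATE sync_state SET locked = true WHERE source = 'items' AND locked = false;
function tryLock() {
  if (syncState.locked) return false; // another run holds the lock: bail out
  syncState.locked = true;
  return true;
}

// INSERT INTO items ... ON CONFLICT (id) DO NOTHING;
function upsert(rec) { if (!items.has(rec.id)) items.set(rec.id, rec); }

// One page of work; rerunning it is safe.
function processPage(page) {
  for (const rec of page.records) upsert(rec);
  // UPDATE sync_state SET cursor = :next_cursor, locked = false WHERE source = 'items';
  syncState.cursor = page.next_cursor;
  syncState.locked = false;
}

if (tryLock()) {
  const page = { records: [{ id: 'a' }, { id: 'b' }], next_cursor: 'c1' };
  processPage(page);
  // Simulate a retry of the same page (crash happened before checkpointing):
  if (tryLock()) processPage(page); // upserts are no-ops, cursor re-set to c1
}
console.log(items.size, syncState.cursor); // 2 c1
```

A real overlapping cron run would call `tryLock` while `locked` is still true, get `false` back, and exit without touching the cursor, which is exactly the race protection the conditional UPDATE gives you.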


Thanks @achamm @Niffzy


This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.