I am operating an n8n workflow that synchronizes data from a third-party API into my database. The workflow is triggered by a Cron node every 5 minutes, and my goal is to process each record exactly once, even when retries or failures occur.
The external API I’m integrating with has the following behavior:
• Cursor-based pagination using next_cursor
• No guaranteed ordering of records
• Duplicate records can appear across pages
• Occasional timeouts and HTTP 429 (rate-limit) errors
Describe the problem/error/question
After running this workflow for several days:
• Some records are missing
• Some records are processed more than once
• Failed executions resume with stale or incorrect cursors
• Retries sometimes re-process already handled data
• Overlapping Cron executions cause inconsistent results
What is the error message (if any)?
There is no single error message; the failures show up as silent data issues. I need to understand:
1. Why exactly-once processing is failing, even though I’m using database upserts
2. How retries and crashes are corrupting my cursor state
3. Why Split In Batches is unsafe in my cursor-paginated workflow
4. How overlapping Cron executions introduce race conditions
5. How a partial failure mid-pagination leads to data loss or duplication
Please share your workflow
Honestly, this is a lot of questions, but the core issue is that exactly-once delivery doesn’t exist at the network level; you can only get it at the storage layer, with proper upserts on a stable unique key. For the overlapping cron runs, set your workflow timeout to match your cron interval, or use concurrency control. Split In Batches won’t preserve cursor state across failures, so you’re better off doing the pagination in a Code node, where you control the loop and persist the cursor yourself.
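To make that concrete, here is a minimal sketch of such a loop. All four helper names (fetchPage, loadCheckpoint, saveCheckpoint, processBatch) are hypothetical placeholders for your own HTTP call and database access, not n8n built-ins:

```javascript
// Sketch of a pagination loop that owns its cursor state.
// fetchPage wraps the API call, loadCheckpoint/saveCheckpoint read and
// write a cursor row in your own database, and processBatch upserts
// one page of records. All four are stand-ins for your own code.
async function syncAll(fetchPage, loadCheckpoint, saveCheckpoint, processBatch) {
  let cursor = await loadCheckpoint();  // null on the very first run
  while (true) {
    const page = await fetchPage(cursor);
    await processBatch(page.records);   // upserts, so replaying a page is safe
    if (!page.next_cursor) break;       // last page: nothing left to advance
    cursor = page.next_cursor;
    await saveCheckpoint(cursor);       // persist only AFTER the batch committed
  }
}
```

Because the checkpoint is written only after the batch is stored, a crash can at worst replay one page, and the upserts absorb that replay.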
Exactly-once isn’t something you can get at the network level; you have to build it at the storage layer. The real fix is upserts keyed on a stable unique ID from the source data, plus a checkpoint table that persists your last successfully processed cursor position outside of n8n’s execution context. Also set N8N_CONCURRENCY_PRODUCTION_LIMIT=1 so your cron runs don’t overlap each other.
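A sketch of those two storage-layer pieces, assuming Postgres; the table and column names (records, checkpoints, external_id, last_cursor) are made up for illustration, so adapt them to your schema:

```javascript
// Hypothetical Postgres statements for the storage layer.
// ON CONFLICT on a stable unique key is what gives you effective
// exactly-once: a replayed page becomes a harmless overwrite.
const upsertRecordSql = `
  INSERT INTO records (external_id, payload, updated_at)
  VALUES ($1, $2, now())
  ON CONFLICT (external_id)
  DO UPDATE SET payload = EXCLUDED.payload, updated_at = now()`;

// The checkpoint row lives outside n8n's execution context, so a
// crashed or retried execution resumes from the last committed cursor.
const saveCheckpointSql = `
  INSERT INTO checkpoints (workflow_name, last_cursor, updated_at)
  VALUES ($1, $2, now())
  ON CONFLICT (workflow_name)
  DO UPDATE SET last_cursor = EXCLUDED.last_cursor, updated_at = now()`;
```

Run both through a parameterized query in your Postgres node or client so the values are never interpolated into the SQL string.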
Exactly-once processing fails because the cursor is stored in static workflow data, which is shared across executions and non-transactional. Retries, crashes, and overlapping Cron runs advance the cursor incorrectly, causing duplicates and missing records. Split In Batches compounds this because it holds its loop state in memory, so a failed execution loses its position mid-pagination.
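The timeouts and HTTP 429s from the original post also need handling: retry with exponential backoff inside the Code node, so a transient failure doesn’t abort an execution mid-pagination. A sketch, where fetchFn stands in for your real HTTP call and the retry counts and delays are illustrative:

```javascript
// Sketch of a bounded retry with exponential backoff for one page fetch.
// fetchFn is a hypothetical wrapper around your HTTP request; it is
// assumed to resolve to an object with a numeric `status` field.
async function fetchPageWithRetry(fetchFn, cursor, maxRetries = 5, baseDelayMs = 1000) {
  let delay = baseDelayMs;
  for (let attempt = 0; ; attempt++) {
    try {
      const res = await fetchFn(cursor);
      if (res.status === 429) throw new Error('HTTP 429: rate limited');
      return res;                            // success: hand the page back
    } catch (err) {
      if (attempt >= maxRetries) throw err;  // give up, let the workflow fail
      await new Promise(resolve => setTimeout(resolve, delay));
      delay *= 2;                            // exponential backoff between attempts
    }
  }
}
```

Crucially, a retry here only re-fetches one page against the last committed cursor; combined with upserts, the worst case is a harmless replay of that page.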