Hey everyone,
I’m facing an issue with my workflow where branches that run in parallel seem to be interfering with each other, causing incorrect calculations. After multiple tests, I suspect this is either a bug or unexpected behavior in how data is handled when multiple branches run simultaneously.
The Scenario
- I have a workflow that runs every hour (triggered by Schedule Trigger).
- It fetches data from an API (HTTP Request) and processes it using a Split Out node.
- From there, several branches (sub-flows) run in parallel, each handling different types of calculations and storing results in Google Sheets.
The calculations include:
- Counting active users per plan (monthly vs. annual)
- Counting trials started, converted, or canceled
- Counting subscription cancellations
These calculations are performed inside Code nodes, and each branch works on the same dataset but extracts different insights.
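To give a rough idea, a branch’s Code node does something along these lines (a simplified sketch; the field names `plan` and `status` are illustrative, not my real schema):

```javascript
// Runs in "Run Once for All Items" mode.
// Counts active users per plan from the items coming out of the Split Out node.
const counts = { monthly: 0, annual: 0 };

for (const item of $input.all()) {
  const user = item.json;
  // Illustrative field names; my real payload looks different.
  if (user.status === 'active' && counts[user.plan] !== undefined) {
    counts[user.plan] += 1;
  }
}

// One summary item that the Google Sheets node appends downstream.
return [{ json: { metric: 'active_users_per_plan', ...counts } }];
```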
The Problem
- When I run each branch individually, the results are correct.
- When all branches run together in parallel, the numbers become inconsistent.
- Running the same workflow manually (step-by-step) produces the correct numbers.
- However, when executed automatically, the calculations return incorrect values.
This suggests that parallel execution might be affecting data integrity, but I can’t pinpoint why.
Tests I’ve Already Tried (But Didn’t Work)
- Added a “Wait” node after fetching the API data → No effect.
- Added a “Wait” node between branches to force sequential execution → No effect.
- Tried storing the API data in a Set node before processing → No effect.
- Used “pairedItem” in the Code nodes to maintain the data reference (see sketch below) → No effect.
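For the pairedItem test, I returned items roughly like this (simplified; the exact mapping is illustrative):

```javascript
// "Run Once for All Items" mode: pass the data through while keeping a
// reference back to each source item via pairedItem.
return $input.all().map((item, index) => ({
  json: { ...item.json },
  pairedItem: { item: index },
}));
```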
The only test that worked:
- If I disable all branches except one, the calculations are correct.
- This confirms that the issue happens when multiple branches execute in parallel.
Possible Causes (Need Help!)
- Could there be data conflicts or race conditions when multiple Code nodes process the same dataset in parallel? (See the sketch after this list.)
- Does n8n handle data isolation correctly between parallel branches?
- Is there a best practice for handling large datasets across multiple branches?
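One thing I’m considering (but haven’t verified) is that items might be passed to the parallel branches by reference, so a Code node that mutates `item.json` in one branch could affect what the sibling branches see. A defensive deep copy at the top of each Code node would rule that out; this is only a sketch based on that assumption:

```javascript
// Work on a deep copy so nothing this branch mutates can leak into the
// items the other branches see. (Only helps if the root cause really is
// shared object references between branches, which I have not confirmed.)
const myItems = $input.all().map((item) => ({
  json: JSON.parse(JSON.stringify(item.json)),
}));

// ...this branch's counting logic runs on myItems instead of $input.all()...
return myItems;
```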
I’d love to hear if anyone has faced a similar issue or if there are best practices to avoid parallel execution problems in n8n.
Thanks in advance!
Information on your n8n setup
- n8n version: 1.81.4
- Database (default: SQLite):
- n8n EXECUTIONS_PROCESS setting (default: own, main):
- Running n8n via Docker
- Operating system: