Issue with parallel workflows

Hey everyone,

I’m running into an issue where parallel branches in my workflow seem to interfere with each other, producing incorrect calculations. After multiple tests, I suspect this is either a bug or unexpected behavior in how data is processed when multiple branches run simultaneously.


:small_blue_diamond: The Scenario

  • I have a workflow that runs every hour (triggered by Schedule Trigger).
  • It fetches data from an API (HTTP Request) and processes it using a Split Out node.
  • From there, several branches (sub-flows) run in parallel, each handling different types of calculations and storing results in Google Sheets.

The calculations include:
:heavy_check_mark: Counting active users per plan (monthly vs annual)
:heavy_check_mark: Counting trials started, converted, or canceled
:heavy_check_mark: Counting subscription cancellations

These calculations are performed inside Code nodes, and each branch works on the same dataset but extracts different insights.
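For context, each branch’s Code node looks roughly like this. A minimal sketch of the active-users count (the field names `plan` and `status` are stand-ins for whatever the API actually returns):

```javascript
// "Run Once for All Items" Code node: count active users per plan.
// `plan` and `status` are assumed field names from the API payload.
const counts = { monthly: 0, annual: 0 };

for (const item of items) {
  const { plan, status } = item.json;
  if (status === 'active' && plan in counts) {
    counts[plan] += 1;
  }
}

return [{ json: counts }];
```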


:small_blue_diamond: The Problem

  • When I run each branch individually, the results are correct.
  • When all branches run together in parallel, the numbers become inconsistent.
  • Running the same workflow manually (step-by-step) produces the correct numbers.
  • However, when executed automatically, the calculations return incorrect values.

This suggests that parallel execution might be affecting data integrity, but I can’t pinpoint why.


:small_blue_diamond: Tests I’ve Already Tried (But Didn’t Work)

:one: Added a “Wait” node after fetching API data → No effect.
:two: Added a “Wait” node between branches to force sequential execution → No effect.
:three: Tried storing API data in a Set node before processing → No effect.
:four: Used “pairedItem” in Code nodes to maintain data references → No effect.
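For reference, test :four: boiled down to returning items with an explicit pairedItem reference from each Code node, something like this sketch (the actual calculations are omitted):

```javascript
// Return each item with a pairedItem reference so n8n can trace
// every output item back to the input item it came from.
return items.map((item, index) => ({
  json: item.json,
  pairedItem: { item: index },
}));
```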

:bulb: The only test that worked:

  • If I disable all branches except one, the calculations are correct.
  • This confirms that the issue happens when multiple branches execute in parallel.

:small_blue_diamond: Possible Causes (Need Help!)

  • Could there be data conflicts or race conditions when multiple Code nodes process the same dataset in parallel? (See the defensive-copy sketch after this list.)
  • Does n8n handle data isolation correctly between parallel branches?
  • Is there a best practice for handling large datasets across multiple branches?
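To illustrate the first point: the pattern I’m worried about is one Code node mutating `item.json` in place while a sibling branch still reads it. A defensive sketch that should rule this out (structuredClone may not be available in the Code node sandbox, hence the JSON fallback):

```javascript
// Defensive copy: clone incoming items before touching them, so a
// mutation in this branch can't leak into a sibling branch's data.
const cloned = items.map((item) => ({
  json: typeof structuredClone === 'function'
    ? structuredClone(item.json)
    : JSON.parse(JSON.stringify(item.json)), // fallback if structuredClone is unavailable
}));

// ...run this branch's calculations on `cloned` instead of `items`...
return cloned;
```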

I’d love to hear if anyone has faced a similar issue or knows best practices for avoiding parallel execution problems in n8n.

Thanks in advance! :raised_hands:

Information on your n8n setup

  • n8n version: 1.81.4
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via Docker
  • Operating system:

Have you tried adding detailed logging to your code to gain insight into how and when the data is getting skewed? It might just be some weird concurrency issue with the Sheets API.
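Something along these lines at the top and bottom of each Code node would show which branch goes wrong and when (a sketch; the 'active-users' label and the `plan` field are placeholders):

```javascript
// Tag each branch's logs with a label and timestamp so a skewed run
// can be traced to a specific branch and moment.
const branch = 'active-users'; // placeholder: one label per branch

console.log(`[${branch}] ${new Date().toISOString()} items in: ${items.length}`);

const counts = items.reduce((acc, item) => {
  const plan = item.json.plan ?? 'unknown'; // assumed field name
  acc[plan] = (acc[plan] ?? 0) + 1;
  return acc;
}, {});

console.log(`[${branch}] ${new Date().toISOString()} counts: ${JSON.stringify(counts)}`);

return [{ json: counts }];
```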

Thanks for your suggestion! I haven’t added detailed logging to the code yet, but I checked the output of the Code node, and the data is already incorrect before the request to the Google Sheets API.

It seems like the issue might be related to how the data is received by the Execute Workflow node. It’s as if the “Wait For Sub-Workflow Completion” option weren’t enabled, causing the flow to proceed before some of the earlier data processing has completed.

I’ll investigate further, but does this behavior sound familiar to you?


I ran a new test by simply changing the order of the parallel branches in my workflow. I placed the simpler/faster processes first and the most complex one last. Surprisingly, this seems to have fixed the issue!

The data is now being processed correctly, and the final counts match the expected results. This makes me wonder if the problem could be related to a combination of the data volume being received and how long it remains cached in memory.

I’ll keep monitoring. Not exactly a solution, but for now, this adjustment seems to be working.


Following the recommendation from the n8n team, I’m posting to clarify that, as mentioned in my reply above, I found a workaround for the issue. However, I still don’t have an explanation for why it was happening before I applied the workaround. I’d be happy to learn more about it!

Thanks!
