Let us define a workflow that runs 5 HTTP requests in serial. Nodes 1, 2 and 3 can be run in parallel; nodes 4 and 5 depend on the results of 1, 2 and 3. How can we parallelize this workflow to improve performance in an n8n-idiomatic way?
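In plain JavaScript terms, the execution order being asked for looks like the sketch below. `request()` is a hypothetical stand-in for the five HTTP Request nodes, not an n8n API:

```javascript
// Stand-in for one HTTP Request node: resolves with a result after a delay.
const request = (id, ms) =>
  new Promise((resolve) => setTimeout(() => resolve(`result-${id}`), ms));

async function runWorkflow() {
  // Nodes 1, 2 and 3 have no dependencies on each other: fire them together.
  const [r1, r2, r3] = await Promise.all([
    request(1, 50),
    request(2, 50),
    request(3, 50),
  ]);

  // Nodes 4 and 5 need the results of 1-3, so they only start after the
  // Promise.all above has settled; they can still run concurrently with
  // each other.
  const [r4, r5] = await Promise.all([request(4, 50), request(5, 50)]);

  return [r1, r2, r3, r4, r5];
}
```

With equal delays this finishes in roughly two "hops" instead of five, which is the speed-up being asked for.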
Share the output returned by the last node
Information on your n8n setup
core
  n8nVersion: 1.80.5
  platform: docker (self-hosted)
  nodeJsVersion: 20.18.3
  database: postgres
  executionMode: scaling
  concurrency: -1
  license: enterprise (production)
  consumerId: 94f874ca-d834-43c9-8d19-475eb62ba0ed
storage
  success: all
  error: all
  progress: false
  manual: true
  binaryMode: memory
pruning
  enabled: true
  maxAge: 336 hours
  maxCount: 10000 executions
client
  userAgent: mozilla/5.0 (windows nt 10.0; win64; x64) applewebkit/537.36 (khtml, like gecko) chrome/133.0.0.0 safari/537.36
I’m facing a similar issue and I don’t think there’s a clear way to solve this. I have an HTTP node that makes an API call that takes quite a while to execute. The input to this node is usually 8-12 items, and I’d like to make all those 8-12 HTTP calls concurrently, not one after the other. Calling another workflow without waiting means this workflow just moves on, so how does that help? I imagine that’s why @brunolnetto replied by emphasising that the outputs of the callee still need to be aggregated, so the caller workflow must wait. The problem is not the waiting; the problem is that the calls are made one after the other instead of all at the same time.
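For reference, the "all at the same time" behaviour described here is what `Promise.all` over the input items gives you inside a single Code node. In the sketch below, `callApi` is a hypothetical stand-in for the slow API call (in a real Code node it would be an actual HTTP request), and the item shape mimics n8n's `{ json: ... }` convention:

```javascript
// Hypothetical stand-in for the slow API call made per item.
const callApi = async (item) => {
  await new Promise((r) => setTimeout(r, 20)); // simulated latency
  return { json: { id: item.json.id, status: 'done' } };
};

// Fire one call per input item immediately, then wait for all of them.
// Total wall time is roughly one call's latency, not items.length calls.
async function processItems(items) {
  return Promise.all(items.map(callApi));
}
```

In an actual Code node set to "Run Once for All Items", the same pattern applies: map the incoming items to promises, then `await Promise.all(...)` and return the combined array.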
I understand the issue. The previous answer was indeed not completely right.
There are plenty of workarounds to be found, but the easiest and most stable is to have a table somewhere, or for example Redis, to keep track of the requests/subflows running in parallel, and then have that trigger a final workflow that aggregates the data and continues processing. For the aggregation, I normally send the output to a Redis cache and grab it from there.
There is not really a native way to run certain things in parallel, wait for them all to finish, and then continue the workflow.
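A minimal sketch of the bookkeeping described above, with an in-memory `Map` standing in for Redis (the key layout, `recordResult`, and the counter logic are illustrative, not n8n or Redis client APIs):

```javascript
// In-memory stand-in for Redis; in practice these would be
// Redis hash/counter operations (HSET + INCR) keyed by run id.
const store = new Map();

function recordResult(runId, subflowId, result, expected) {
  const key = `run:${runId}`;
  const entry = store.get(key) ?? { results: {}, count: 0 };

  // Each parallel sub-workflow writes its output under the shared run key
  // and bumps a completion counter as its last step.
  entry.results[subflowId] = result;
  entry.count += 1;
  store.set(key, entry);

  // The last sub-workflow to finish sees count === expected; that is the
  // moment to trigger the final aggregation workflow with all results.
  return entry.count === expected ? entry.results : null;
}
```

In n8n terms: each sub-workflow ends with a node that performs this write, and only the invocation that completes the count calls (for example via a Webhook or Execute Workflow trigger) the workflow that aggregates and continues processing.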