My sub-workflows are running sequentially. n8n should behave like a normal orchestrator, but there seems to be no option to run things in parallel.
The workflow is pretty simple:
Input → run 5 things in parallel, wait for all of them to finish, then execute two more steps.
This is basic orchestration. I’ve tried the suggested webhook approach, but it just eats into the concurrent-executions limit, because you need to make the sub-workflow active.
Please suggest a solution that does not affect the overall execution concurrency.
Standard nodes run sequentially. A workaround is to use Execute Workflow nodes to run multiple processes concurrently. Here’s an example; the two Execute Workflow nodes run concurrently.
Looks like I was mistaken in my first answer: they did not run concurrently. I’m going to see if I can find a workaround, and I’ll bookmark this post as well.
Hi @rbreen, thanks for looking into this. I need to run them in parallel, but not as fire-and-forget: I want to do something with the result. In your example, there is another node after the “Execute Workflow” and “Execute Workflow1” nodes that collects the data and does something with it. This is a trivial thing in any workflow orchestrator. However, here, if I disable the wait option, there is no way to get the result, and if I manually add a Wait node, it just waits for the sub-workflows (and they run sequentially again). See:
AFAIK there is currently no easy way of running two workflows in parallel while also waiting for their output.
One solution would be to save the output of the executed sub-workflows to some sort of database and check in each sub-workflow whether all the other sub-workflows have saved their data to the database. If that’s true, you can start another sub-workflow, which loads the data from the other sub-workflows and finishes your operation. (Alternatively, you could place a Wait node after the execution of the sub-workflows in the main workflow and resume the main workflow with its resume URL; you can get that URL with the expression `$execution.resumeUrl`.)
I do not have an example on hand right now, but I hope this helps.
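As a rough sketch of the resume-URL variant (not from an actual export; the field name `resumeUrl` and the node placement are my assumptions), a Code node in the main workflow, placed right before the Execute Workflow nodes, could attach the resume URL to every item:

```js
// Hypothetical Code node in the main workflow, run before the
// Execute Workflow nodes. It attaches this execution's resume URL
// to every item so the sub-workflows can call back when done.
const resumeUrl = $execution.resumeUrl;

return $input.all().map((item) => ({
  json: { ...item.json, resumeUrl },
}));
```

Each sub-workflow would then save its result to the database and, if it sees that all expected results are present, call that URL (e.g. with an HTTP Request node) to release the Wait node in the main workflow.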
Hey @FelixL, indeed this might work, but it’s a struggle to put together. I was hoping there was a normal, easy way to do this; for me it’s a blocker. When using tools like this, the expectation is that they make your life easier, not more complicated.
Yes, I have 8 sub-workflows that each aggregate data about a different aspect of a document. Each of them receives the same document and executes its functionality (about 10 seconds per sub-workflow). Currently, if I bring this to production, users must wait 80 seconds. I want to run everything in parallel to reduce the waiting time to roughly 10 seconds.
I’ve made a simple working workflow that implements caching of the results in a database.
After the main workflow starts the execution of each sub-workflow, it enters a loop that checks the count of result items in the database.
Each sub-workflow saves its result in a database (MongoDB in this case).
As soon as the expected number of results is in the database, the loop stops and all result items are loaded from the database.
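For reference, the loop’s exit check can be a plain IF-node condition written as an n8n expression; in this sketch, “Find Results” is an assumed name for the MongoDB node that fetches the saved results, and 8 is the expected number of sub-workflows:

```
{{ $('Find Results').all().length >= 8 }}
```

If the condition is false, a short Wait node can delay the next iteration before the results are fetched again.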
The main workflow then finishes the aggregation of all results; in my case it just calculates the maximum duration any single sub-workflow took to execute and the duration of the whole process.
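The aggregation itself can be a small Code node; this sketch assumes each sub-workflow saved an item with `startedAt`/`finishedAt` ISO timestamps (the field names are assumptions):

```js
// Hypothetical aggregation Code node, run after all results are
// loaded from the database. Computes the longest single sub-workflow
// duration and the wall-clock duration of the whole fan-out.
const results = $input.all().map((item) => item.json);

const starts = results.map((r) => new Date(r.startedAt).getTime());
const ends = results.map((r) => new Date(r.finishedAt).getTime());

return [{
  json: {
    maxSubworkflowSeconds: Math.max(
      ...results.map((r, i) => (ends[i] - starts[i]) / 1000)
    ),
    totalSeconds: (Math.max(...ends) - Math.min(...starts)) / 1000,
  },
}];
```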
IMPORTANT: Make sure to set a timeout in the workflow settings to prevent the loop from running indefinitely. This can happen if a sub-workflow fails to complete properly.
Documentation: Settings | n8n Docs
Main Workflow:
Subworkflow:
I’ve just used MongoDB because it seemed convenient, but my implementation has some drawbacks with it: I create a new collection each time the workflow executes, and there is no MongoDB node operation to drop a collection again.
Depending on the data, I would probably use PostgreSQL to save the results, with the execution id of the main workflow as an additional field, making it possible to count the results from the correct execution (this is also possible in MongoDB, MySQL, or any other database).
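A minimal sketch of that tagging step (the Code node and the `mainExecutionId` field name are assumptions):

```js
// Hypothetical Code node in the main workflow, run before the Execute
// Workflow nodes: tag every item with the id of this (main) execution.
// Each sub-workflow stores the field alongside its result, so the
// count and load queries can filter on the current run only.
return $input.all().map((item) => ({
  json: { ...item.json, mainExecutionId: $execution.id },
}));
```

The polling check then counts only rows whose `mainExecutionId` matches the current run, so parallel executions of the main workflow don’t interfere with each other.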