More of a question than a problem: is there a performance penalty incurred from invoking a sub-workflow via HTTP POST/Webhook vs. calling it via the Execute Workflow trigger? I use the HTTP method for various reasons, and I actually wrote up a tutorial over the weekend showing a specific dev tactic I use called "chokepoints" for creating modular workflows. But I'm interested in whether anyone has done a side-by-side benchmark quantifying the performance difference. I have heard that the HTTP request method kicks off an additional execution, so perhaps at scale that becomes a big deal?
Using a common standard like HTTP POST for this seems advantageous from a pure interoperability perspective. And the technique I wrote up, using Postman/Pipedream to create "chokepoints" in a modular application, has been helpful personally for isolating functionality during development and debugging. But I don't have any feel for the performance hit I may be incurring by doing it this way. Are there any other advantages (security, maybe?) to using the Execute Workflow trigger?
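For anyone who wants to put rough numbers on the HTTP side themselves, here's a minimal sketch of how I'd probe it (run as an `.mjs` file on Node 18+). The webhook URL, payload, and run count are placeholders, and note this measures total wall-clock time including the sub-workflow's own runtime, so keep the sub-workflow trivial if you want to isolate the invocation overhead. The Execute Workflow path would have to be timed separately, e.g. by comparing timestamps in n8n's execution log.

```javascript
// Rough latency probe for a webhook-triggered sub-workflow (Node 18+, ESM).
// WEBHOOK_URL is a placeholder -- point it at your own n8n webhook.
const WEBHOOK_URL = "https://n8n.example.com/webhook/my-subworkflow";
const RUNS = 50;

const timings = [];
for (let i = 0; i < RUNS; i++) {
  const start = performance.now();
  const res = await fetch(WEBHOOK_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ run: i }),
  });
  await res.text(); // drain the body so the request fully completes
  timings.push(performance.now() - start);
}

timings.sort((a, b) => a - b);
const mean = timings.reduce((sum, t) => sum + t, 0) / timings.length;
console.log(`mean ${mean.toFixed(1)} ms, median ${timings[Math.floor(RUNS / 2)].toFixed(1)} ms`);
```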
What is the error message (if any)?
No error. It's more a question to see whether anyone has benchmarked this.
Information on your n8n setup
n8n version: 1.56.2
Database (default: SQLite): SQLite
n8n EXECUTIONS_PROCESS setting (default: own, main): own
Running n8n via (Docker, npm, n8n cloud, desktop app): Docker
An HTTP request does indeed create a new execution of the workflow. It probably also goes outside of your network to then hit your n8n instance again, which would not be good for performance. Also, I don't really see a reason for using HTTP/webhook anymore, since you can now set the Execute Workflow node to not wait for the sub-workflow, which was the only real reason to do it in the past.
Thanks for your reply. Has anything changed in the last few releases with respect to passing variables into a sub-workflow? I haven't tested this lately, but as recently as three months ago I could find no way to "black box" sub-workflows. By that I mean I want the ability to constrain the variables I pass into a workflow and have it return a predictable result. It would be the equivalent of the Scenario Inputs feature in Make.com. The only way I could figure out how to accomplish this was via the HTTP Request/Webhook method for modular workflows. These docs make it seem like passing data to a sub-workflow is possible, but I could never get it to work.
A sub-workflow will be triggered with whatever data comes into the Execute Workflow node when it runs. So you just need to make sure the data you want to pass is shaped as that node's input, with a Set node for example.
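As a concrete illustration (all field names here are made up): a Code node in the parent workflow, placed right before the Execute Workflow node, could shape the items, and the sub-workflow then receives them at its Execute Workflow Trigger.

```javascript
// Parent workflow: a Code node (mode: "Run Once for All Items") placed just
// before the Execute Workflow node. The items returned here become the input
// of the sub-workflow's Execute Workflow Trigger. Field names are illustrative.
const incoming = $input.first().json;

return [
  {
    json: {
      customerId: incoming.customerId ?? "unknown", // taken from the incoming item
      action: "sync",
      requestedAt: new Date().toISOString(),
    },
  },
];
```

Inside the sub-workflow, an expression like `{{ $json.customerId }}` on the first node after the trigger would then resolve to the value set above.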
I have also created a community node for logging that adds some parameters to a sub-workflow. It can also be used just to call a sub-workflow with parameters; it isn't restricted to being a logger node, and it gives you some extra data by default, like the workflow ID and execution ID of the parent.
Ahhh ok, this is good to know, and it would explain my issue: I was attempting to reference data from the initial catch hook in the master workflow from inside the sub-workflow. But yeah, if the sub-workflow can see data from a Set node in the master workflow, then that should work. Thanks for the clarification. Will keep your logger utility in mind for added insight on executions. Cheers