2. and 3. understood. These are basically limitations of n8n which I think should not be there, especially with respect to sensitive data like passwords being provided statically during startup.
It is not just one app. It is n8n that we are trying to deploy in our org, to be used by any number of apps that need it, and those apps may interact with each other.
Think of it in publisher-subscriber pattern terms. There can be 10 subscribers subscribing to the same event of a publisher. The subscribers can be anywhere; it does not matter who they are. The publisher just needs to do "one" thing - publish the event. The publisher should not be bothered about who the user is; it just publishes the event notifying the subscribers, and the rest follows.
In the same way, here the publisher app should just be able to call an API (similar to firing the event), and all the workflows (subscribers) that use the same webhook (subscription event) should be able to do their job. The publisher app does not need to know which workflow or which user it is going to be.
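The pattern being described can be sketched in a few lines of TypeScript (the names here are illustrative only, not n8n internals):

```typescript
// Minimal publisher/subscriber sketch: the publisher fires one event and
// every registered subscriber reacts; the publisher never knows who they are.
type Handler = (payload: unknown) => void;

const subscribers = new Map<string, Handler[]>();

function subscribe(event: string, handler: Handler): void {
  const list = subscribers.get(event) ?? [];
  list.push(handler);
  subscribers.set(event, list);
}

function publish(event: string, payload: unknown): void {
  // One call on the publisher side, any number of reactions on the other.
  for (const handler of subscribers.get(event) ?? []) {
    handler(payload);
  }
}

// Ten subscribers to the same event, one publish call.
const received: number[] = [];
for (let i = 0; i < 10; i++) {
  subscribe("greetings", () => received.push(i));
}
publish("greetings", { salutation: "Hello", name: "World" });
```

The point of the sketch is that adding an eleventh subscriber changes nothing on the publish side.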
In terms of the DB, here is what it may look like. This is the actual n8n DB, but with sample workflows:
Here, there are two workflows - 5 and 7. I have a sample .NET Web API app which is supposed to trigger these two workflows when the "triggerGreetings" API is invoked. Here is the code:
[HttpPost("triggerGreetings")]
public async Task<string> TriggerGreetings(GreetingParams param)
{
    // Create the greeting; this is just the response of this API. It wraps the
    // input param and builds a string of the form "<salutation>, <name>" -
    // similar to a hello-world API, but salutation and name come from the input.
    Greeting g = Greet(param).Value;

    // Call the n8n webhook
    // 1. Get the webhook URL from config
    string webhookUrl = _configuration.GetValue<string>("n8n:webhook");

    // 2. Make the webhook call
    using (HttpClient httpClient = new HttpClient())
    {
        HttpResponseMessage resp = await httpClient.PostAsJsonAsync(webhookUrl, g).ConfigureAwait(false);
        // some more logging and error handling code that I have abridged
        return $"Triggered the webhook {webhookUrl} for \"{g.Greetings}\"";
    }
}
The problem is at step 2 in the above code. I don't think any app developer would agree to maintain a list of webhook URLs to loop through at step 2 for each workflow and user. That is not their job. It also makes the app very non-scalable, high-maintenance and brittle - it breaks the moment any of the URLs changes, potentially impacting other workflows.
Here this publisher app is invoking one webhook URL that is statically saved in the config file and read dynamically to trigger the workflow.
Now connecting that to the DB image above: workflows 5 and 7 both have their own webhook URLs (since they have the path field set uniquely). I wanted this path field to be the same (webhook, and not webhook1 and webhook2) for both workflows. Then the publisher app would just have that one URL and invoke it as above, triggering both workflows. n8n could just find out from the above table which workflows are using the called webhook and invoke all of them! A simple pub-sub pattern. It does not matter which workflow it is or who the user is; each workflow would just do its job!
I wouldn't see 2 and 3 as a limitation. You would only need to provide the credentials at startup through the API if you are not able to provide a static encryption key through an environment variable or config file. In theory all of this information would be stored in a database anyway, maybe something like Postgres, and to connect to that database the credentials would also need to be passed either through a config file or an environment variable.
We could in theory look at moving some of the environment options to the database, which for some fields makes a lot of sense, but what would happen if someone managed to get hold of your database? They would have your credentials and your encryption key, so keeping them split seems like a good idea to me. If someone got hold of your encryption key, you can export your current credentials, change the key, then import them again, so there is a solution for the key getting out.
Multiple webhook URLs are common in almost every application we deal with, as they allow you to have different URIs for different event triggers if needed. In your case though, it looks like you are just using an incoming webhook, so what you can do with the one URL is pass in any data, have your workflow read the body, header or URI options, and then, through the use of IF nodes or Switches, call sub-workflows that apply for each operation you have to handle.
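A minimal sketch of that routing idea, assuming the caller sends a discriminator field in the body (the `operation` field and the sub-workflow names below are made up for illustration, not n8n internals):

```typescript
// One incoming webhook URL; the workflow inspects the body and decides what
// to run, as an IF/Switch node chain would.
interface IncomingBody {
  operation: string;
  [key: string]: unknown;
}

// Stand-ins for the sub-workflows a Switch node would dispatch to.
function pickSubworkflow(body: IncomingBody): string {
  switch (body.operation) {
    case "createUser":
      return "createUserSubworkflow";
    case "deleteUser":
      return "deleteUserSubworkflow";
    default:
      return "fallbackSubworkflow";
  }
}

const chosen = pickSubworkflow({ operation: "createUser", name: "Jane" });
```

The trade-off is that the caller now has to know and send the discriminator, which is exactly what the later posts in this thread push back on.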
Looking at this, I suspect there has been confusion between creating a trigger node and just using the incoming webhook node. That being said, there is possibly not much difference between them in terms of approach; it all depends on whether you want to post to different URIs and handle it in your application, or post to one URI and send through some other param that controls what it is for.
That is quite obvious, Jon. For different events there would be different URLs. The question is about the same event being subscribed to by multiple workflows. Why in that case should there be a separate URL for each workflow?
There are no sub-workflows or any kind of conditions at all! The images above show two separate, very simple workflows - one with one HTTP Request node and another with two HTTP Request nodes. Their triggering event is the same, and they would get the same input from the source app, but the work they are doing is completely different. They are unrelated, but they both want to execute on the same event of the publisher app, i.e., the triggerGreetings event.
The use case is very simple - multiple independent workflows, without any IF conditions, need to do their own work based on the "same" event from the publisher app. The publisher app won't keep maintaining the list of webhook URLs as more workflows become interested in the "same" event. The publisher would just call one URL, and it should invoke all the workflows that use that URL!
And I am only talking about webhook node here. I am assuming trigger node uses something similar. But right now I am only trying to work with webhook node.
It sounds like you're describing an event hub, which n8n natively isn't, but it can function as one. You just need a canonical single definition of an incoming webhook, and then you perform your routing, via conditionals and optionally subflows, from there…
or just use a tool actually designed for that job (Redis, Kafka, RabbitMQ, etc.) and consume it from n8n with a trigger from as many workflows as you want. No tightly coupled URLs breaking things that way.
@Atul_Lohiya At the moment the options are going to be what I mentioned earlier: handle it in one workflow, with extra information being sent to help identify the request, or use something like Redis, as Pemontto has mentioned, which would allow you to do a bit more without having to use a webhook.
@pemontto @Jon
Like I mentioned earlier, there are no sub-workflows or conditions involved at all. There is no single master definition that I am looking for that does everything - not looking for a silver bullet, not looking for a messaging system. I know all those exist, and they are not really workflows; otherwise I would have opted for those in the first place!
There is a single webhook URL that I am looking for that does "one common" thing!
Let me repeat my example here:
There is an employee onboarding app, and there are four other apps ALL of which need to know exactly the same single thing - an employee has been onboarded - and then do "their own independent" work. The four apps are: finance, infra, developmentTeam and backgroundCheck.
Now each team goes into n8n and creates their own workflow, all of which start at the same event - the employeeOnboarded event! So this is what the workflows look like:
financeWorkflow starts with webhookNode (has finance url and receives onboarded employee info) → openBankAccount → openPFAccount
infraWorkflow starts with webhookNode (has infra url and receives onboarded employee info) → orderADesk → orderAPhone → orderAccessCard
devTeamWorkflow starts with webhookNode (has dev url and receives onboarded employee info) → orderVMWithPreinstalledApps → giveAccessToVariousAccounts → addToDistLists
backgroundCheckWorkflow starts with webhookNode (has backCheck url and receives onboarded employee info) → sendNotificationToAgency → sendNotificationToManagement
So all the above four workflows start with a webhook node with a "unique" webhook URL to receive the "same single" employeeOnboarded event and thus exactly the same data! Note that it is not multiple events; it is the "one single" event that all those "independent" apps are interested in.
This is not messaging that I am looking for to allow the apps to interact with each other. I want all those four "independent" apps to be able to start their work as soon as an employee is onboarded.
In the employee onboarding app, the employeeOnboarded event occurs.
Now, with the way n8n works today, the onboarding app needs to know four different webhook URLs for the four workflows above and invoke them all. What happens if tomorrow two more workflows get added that need the same event and info? The onboarding app will need to change for no real reason. It is maintaining a list of URLs that n8n should be maintaining; n8n should be able to figure out which webhook URL is for which workflow. And it actually does that today with the webhook_entity table. It just requires the URLs to be unique for each workflow, even though they are responding to the "exact same" event, and it thus also requires the onboarding app to maintain that list of different URLs, which is unnecessary.
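To make the coupling concrete, here is a sketch of what the onboarding app is forced to do today. The URLs are placeholders, and the HTTP POST is stubbed out so the sketch stays self-contained:

```typescript
// What the publisher must maintain today: one unique webhook URL per
// subscriber workflow. Adding a fifth workflow means editing this list
// and redeploying the onboarding app.
const onboardingWebhooks: string[] = [
  "https://n8n.example.com/webhook/finance",
  "https://n8n.example.com/webhook/infra",
  "https://n8n.example.com/webhook/dev",
  "https://n8n.example.com/webhook/backCheck",
];

const posted: string[] = [];
function post(url: string, payload: object): void {
  // In the real app this would be an HTTP POST; recorded here instead so
  // the sketch runs without a network.
  posted.push(url);
}

function onEmployeeOnboarded(employee: object): void {
  // The fan-out lives in the publisher, which is the complaint being made.
  for (const url of onboardingWebhooks) {
    post(url, employee);
  }
}

onEmployeeOnboarded({ name: "Jane Doe" });
```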
This is what should be happening instead:
1. All of the above four workflows should be able to have just one single webhook path and thus the same URL. So instead of the infra, finance, dev and backCheck URLs, all of them should be allowed to have a common path, say empOnboarded, and thus just one empOnboarded URL. This will still be saved in the webhook_entity table as it happens today; only the URL will no longer be unique to each workflow - the same single URL will be saved against all four workflows.
2. The onboarding app should just call this one single empOnboarded URL, and n8n should be able to figure out which workflows need to be executed based on the URL. That can be found from the webhook_entity table, which will return the ids of the four workflows that need to be executed instead of just one.
3. Execute all the workflows that are returned from step 2!
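The behaviour being requested could be sketched like this. The table shape and the workflow ids are assumptions for illustration, not actual n8n code:

```typescript
// Assumed shape of a webhook_entity-like table where one path may map to
// several workflows.
interface WebhookRow {
  path: string;
  workflowId: number;
}

const webhookEntity: WebhookRow[] = [
  { path: "empOnboarded", workflowId: 5 }, // financeWorkflow (id assumed)
  { path: "empOnboarded", workflowId: 6 }, // infraWorkflow (id assumed)
  { path: "empOnboarded", workflowId: 7 }, // devTeamWorkflow (id assumed)
  { path: "empOnboarded", workflowId: 8 }, // backgroundCheckWorkflow (id assumed)
];

const executed: number[] = [];
function executeWorkflow(id: number, payload: unknown): void {
  executed.push(id); // stand-in for actually running the workflow
}

// One incoming call fans out to every workflow registered on that path.
function handleWebhook(path: string, payload: unknown): number[] {
  const ids = webhookEntity
    .filter((row) => row.path === path)
    .map((row) => row.workflowId);
  for (const id of ids) executeWorkflow(id, payload);
  return ids;
}

const triggeredIds = handleWebhook("empOnboarded", { employee: "Jane Doe" });
```

Here the fan-out lives inside n8n, so the publisher never changes when a fifth workflow subscribes to the same path.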
On the call to the webhook URL, are you planning to send all of the data, or just some of it, possibly over a couple of requests?
I would handle this in n8n with sub-workflows: you would have one webhook that gets the data and uses the Execute Workflow node to load a sub-workflow, so you would have 4 sub-workflows that all start with the normal Start node. Each of these workflows will have all of the data from the parent workflow available, and you can use the variables/data that you need.
If you were to add another process, you would just need to add a new sub-workflow that uses the data you want; as long as you send it in your request, everything is all good.
This does assume that you are posting data to the endpoint and not just calling the webhook. If you are just calling the webhook, it is the same process, but the sub-workflows would need to go off and source the data from whatever system holds it.
We do something similar to this with our internal GDPR deletion workflow with a few extra steps for validating the sub workflows completed.
Ok, so that seems like a workaround for this particular example. The issue with this approach: which team maintains the master/parent workflow? And yes, the data will be posted with the webhook call.
Also, I was thinking in general terms. The above workaround means that if any user wants to create a workflow to do something custom for their use case, they will need to create a sub-workflow and then contact the maintainer of the parent workflow to invoke this new sub-workflow. That in itself is not scalable, and there would definitely not be one person maintaining this parent workflow.
That's why I was looking for a common-webhook-URL kind of solution. That way any number of users could go in and create any number of workflows. Of course all of them must be interested in the same initial event; that is a given. The only thing that all of the users must know is the webhook path, which they can easily be given by the source/publisher app. They can contact the app team, or the team may have some doc which mentions the steps to consume their data based on the specified webhook path, etc.
Then there could be hundreds of users creating hundreds of workflows all getting started by the same single event of same source app. Each workflow then can do its own job independently.
But looks like that is not possible with n8n as of today!
That is pretty much describing an enterprise service bus/event hub. Action X happens (new account) and however many services/teams (0 to 1000s) care about that; they all subscribe to the new_account channel/queue.
As you've stated, it can be done in n8n, but it doesn't work with your organisation's usage/ownership of the system. Another reason to shift to consuming a dedicated external system that can granularly manage access.
What is the purpose of a webhook or trigger node then, in light of the above statement? While Jon and I seem to be on the same page now, you need to understand better the thing I am trying to describe.
It is not possible in the way that you want, but it is still possible to work around it. If you are giving your users access to n8n to set up their own workflow, they could also add an extra run option to the parent workflow at the same time; it is not ideal, but it would solve the issue.
I am running out of ideas on other things you could try. The only thing I have left is to use a database that looks up the IDs of the workflows that need to run, then create a loop in the parent workflow that extracts them, maybe using a tag. Your users could make their flow and tag it, and the parent flow would then automatically call it.
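A rough sketch of the tag lookup that a database node in the master workflow could run. The table and column names (workflow_entity, tag_entity, workflows_tags) are assumptions based on a typical n8n schema, so verify them against your own database first:

```typescript
// Build the SQL a MySQL/Postgres node in the master workflow could run to
// find all workflow ids carrying a given tag. Table/column names are assumed.
function workflowIdsByTagQuery(tagName: string): string {
  // Simple quote-escaping for the sketch; a real node should use parameters.
  const safe = tagName.replace(/'/g, "''");
  return [
    "SELECT we.id",
    "FROM workflow_entity we",
    "JOIN workflows_tags wt ON wt.workflowId = we.id",
    "JOIN tag_entity te ON te.id = wt.tagId",
    `WHERE te.name = '${safe}'`,
  ].join("\n");
}

const query = workflowIdsByTagQuery("empOnboarded");
```

The resulting id list would then feed an Execute Workflow node in a loop, as discussed below in the thread.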
hmm… the DB one seems like a doable workaround, will try that. Thanks for that.
Edit: I am trying this workaround. I created a master workflow with the webhook node. I am not sure which node to use after that to connect to the DB and get the workflows with a particular tag. I added tags to the child workflows, and I know which table they are sitting in; I just don't know how to fetch them from the master workflow irrespective of the underlying DB type (SQLite/MySQL etc.). Once I have that list, I can probably use the Execute Workflow node with "parameter" as the source, where the parameter gets its value from the previous node that fetches the ids from the DB.
Also, I am trying to set up a MySQL DB. I am not sure what npm command to run. I checked the docs at Configuration - n8n Documentation; they just mention using the export command, but I am not sure how to do that. Is that an npm command or an n8n command? If n8n, in which folder?
I tried the below from packages/cli/bin but it shows this error:
'export' is not recognized as an internal or external command,
operable program or batch file.
I also tried this but to no avail:
packages\cli\bin>export DB_TYPE=mysql && export DB_MYSQLDB_DATABASE=n8n && export DB_MYSQLDB_HOST=127.0.0.1 && export DB_MYSQLDB_PORT=3306 && export DB_MYSQLDB_USER=atul && export DB_MYSQLDB_PASSWORD=Atul@mysql123
'export' is not recognized as an internal or external command,
operable program or batch file.
export is an OS (Linux/macOS) shell command to set an environment variable; Windows uses set in the same way (e.g. set DB_TYPE=... instead of export DB_TYPE=...). The n8n CLI export command is not for setting up a database; it is used to export data from the database.
Thanks for the help so far. Here is a new hurdle that I have hit, and I just can't wrap my head around how to fix it.
In my organization, they are not allowing us to download the source code of n8n! Moreover, they have version 0.176.0 of the cli package (which I hope controls the version of n8n as a whole) available separately, and also all the other packages present in the packages folder of the mono-repo except the node-dev package. So here are my few questions:
Which packages form the workflow engine, so I can eliminate others like editor-ui and not worry about installing them on our OpenShift container? The pressure is to get just the engine quickly up and running and develop a custom UI to call the n8n APIs as needed.
In such a scenario, how do I develop a custom node and build and deploy it?
How do I install each individual package so that n8n gets installed and runs properly without errors?
Which dependencies are needed to make it work on a RHEL7 container? For example, on my Windows machine I had to install Python and the Windows build tools. But this is a Linux container, so do I still need to install Python and any Linux-specific build tools? Is there any sample Dockerfile that shows how to install each individual transpiled package (remember, there is no source code available to us) and start n8n properly?
That sounds like an interesting problem. Without the UI I am not sure how you would get it working; it sounds more like an embedded approach rather than just using n8n.
I am not sure how you would build your workflows unless you were to build them on a different instance and then use the API or CLI to import them.
If you are not building from source, will you be installing from npm instead, or using one of our existing Docker images? Using just the CLI package is unlikely to work.
To make it work on a RHEL container you would first need to work out how you are going to actually get it running. Once that is worked out, you can find our Dockerfiles on GitHub; we do have one for RHEL (n8n/Dockerfile at master · n8n-io/n8n · GitHub), but given the limitations you have I suspect there will be more to it.
Out of interest, why can't you just install the npm version? It would be a lot easier for you, and if needed I am sure we can answer any questions that the team blocking the standard install methods might have.
I am continuing to understand n8n better. In that effort, I wanted to understand following things. Can you help me understand these or point me to right resources?
When a workflow executes, where is the intermediate data from the various applications/services/integrations stored? Data returned from one service may be used in multiple following nodes, so it must be getting persisted somewhere?
Saving the above-mentioned intermediate data becomes necessary if the workflow has to wait for the next node's execution, for example when an HTTP node's target call may be long-running, possibly for hours, because it invokes a batch job. In such a case, the HTTP call should return immediately, making the workflow enter a suspended state, and when the long-running job ends, the workflow could resume. It then becomes necessary to save the intermediate data from the nodes prior to suspending, so that it is available after the workflow resumes.
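A conceptual sketch of that suspend/resume idea (this is not n8n's actual implementation, just the shape of the problem): persist the state before waiting, restore it on resume.

```typescript
// Why intermediate node data must be persisted across a suspension:
// serialize it before waiting on the long-running call, restore it when
// the workflow resumes.
interface ExecutionState {
  executionId: string;
  lastNodeOutput: Record<string, unknown>;
}

const stateStore = new Map<string, string>(); // stand-in for a database table

function suspend(state: ExecutionState): void {
  // Persist everything downstream nodes will need before going to sleep.
  stateStore.set(state.executionId, JSON.stringify(state));
}

function resume(executionId: string): ExecutionState {
  const raw = stateStore.get(executionId);
  if (raw === undefined) {
    throw new Error(`no saved state for execution ${executionId}`);
  }
  return JSON.parse(raw) as ExecutionState;
}

// The HTTP node kicks off a long-running batch job, saves its output, suspends.
suspend({ executionId: "exec-1", lastNodeOutput: { batchJobId: 42 } });

// Hours later the job calls back and the workflow resumes with the saved data.
const restored = resume("exec-1");
```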
I am fairly sure everything is kept in memory instead of being saved to the database on each step, which would cause things to run slower. There is an option you can toggle to store it in the database, but I am not sure if you can easily access that from within a node's code; I think it is just for replaying executions later if they fail.
For the second part, you can use the Wait node with the webhook option, so you can make a call and then use the second webhook URL the execution generates to resume the workflow.
Failed executions, for example because of a server restart or a long-running HTTP call, are the reason I am interested in persisting the intermediate data. Which option can I look at to enable this persistence? Any docs? Which class handles this?
It is under the workflow settings as "Save Execution Progress"; this is briefly mentioned here: Workflows - n8n Documentation
As for the class, it is likely going to be one of the workflow ones found in the core package. In theory all you need to do is change the dropdown, or set the default env option for that value, and all is good; from the execution log you should be able to tell it to re-run.