Describe the problem/error/question
I have a big workflow in which the last few nodes all take an unusually long time to open/edit in the builder as well as during execution. See the screenshot of the logs below. The time they take is also the same (in seconds) in every execution, and it is very high. To edit those nodes quickly in the builder, what I'm painfully doing is: first disconnect them by removing the inbound connections from the previous nodes (then they become fast to open and edit), then edit them, then reconnect the inbound connections to the previous nodes.
What is the error message (if any)?
Please share your workflow
Share the output returned by the last node
Input and output are fine for each node as expected; the workflow's input and output are all fine except for the time taken by the last few nodes. The last node sends a notification to my personal Telegram, and its output is the API response from Telegram, which is fine.
Information on your n8n setup
- **n8n version:** Latest (1.123.4)
- **Database (default: SQLite):** default
- **n8n EXECUTIONS_PROCESS setting (default: own, main):** default
- **Running n8n via (Docker, npm, n8n cloud, desktop app):** npm
- **Operating system:** macOS
Hi @utsavmadaan823, may I ask why you are using Google Sheets for what seems to be queue management of data? I would generally not advise using Google Sheets for transactional data management in your flow. I would much rather recommend a proper database, which can handle queries a lot better and faster.
Hi,
Thanks for the reply. Actually, the workflow requires a bit of manual work too, so I make changes to the Google Sheet, it detects them, and it processes accordingly. Moreover, the results are also maintained in a Google Sheet column for each row of data processed, which makes it easier for me to visually see everything n8n has done.
Anyway, Google Sheets is not the issue: I have tens of Google Sheets nodes in the same workflow, and only these last 3 nodes take an unusually long time; all the rest take a normal amount of time. The internet is not an issue either. As I said, it's only these 3 nodes, and they always take the same number of seconds. Very strange.
How much data is in the google sheet you’re trying to read?
I just implemented my workflow; right now there are only 5-7 line items.
The workflow runs every x minutes, queries the Google Sheet for 1 item only, processes it, and is done. It repeats every x minutes.
But as I said, Google Sheets could become a performance bottleneck only in the future; this workflow is new and has very few line items. The issue I am reporting is not related to Google Sheets. Look at the last node: sending a notification on Telegram always takes 44 seconds each time this workflow runs.
Ok, yes, in that case it should not take up to 30 seconds to return an item from the sheet. Is it possible to share your workflow, with some sample data pinned, in a code block? I'd like to see if I can run it on my instance so we can determine whether it's the nodes or maybe something weird with your setup. It looks like you're just running a local npm instance?
Ok, so I went on diagnosing what's causing this issue by duplicating the workflow and experimenting with changes.
The problem is this Error Gateway node. It is connected to many nodes' error outputs. What I am doing is: if for some reason any node errors so that a Google Sheets row item cannot be processed completely, then through this error gateway I increase the failed-counter column value by 1; rows with a failed counter of 2 or less still get processed in the next execution; but if the failed counter for a row reaches 3 or more, I mark it failed so it does not get processed in the next execution, and I notify on Telegram so I am aware.
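In plain code, the failed-counter logic described above amounts to something like this (a hypothetical Python sketch, not the actual n8n nodes; `MAX_FAILURES`, the row keys, and `notify_telegram` are illustrative assumptions):

```python
MAX_FAILURES = 3  # after 3 failed attempts a row is retired

def notify_telegram(message: str) -> None:
    # Placeholder for the Telegram notification node.
    print(message)

def handle_error(row: dict) -> dict:
    """Increment the row's failed counter; retire it after MAX_FAILURES."""
    row["failed_count"] = row.get("failed_count", 0) + 1
    if row["failed_count"] >= MAX_FAILURES:
        row["status"] = "failed"   # skipped by future executions
        notify_telegram(f"Row {row.get('id')} permanently failed")
    else:
        row["status"] = "pending"  # retried on the next run
    return row
```

So a row gets two retries before it is marked failed and a Telegram alert goes out.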
For me, the workflow is the way I like it, but I think it doesn't sit well with n8n's implementation: with this many nodes connecting to the error gateway, those subsequent nodes start performing very poorly (both in builder opening/editing time and in execution time, whether in main or worker, dev or prod). Even though the error gateway and the subsequent error-handling nodes are called only rarely (when some node causes an error), they degraded the performance for me.
So should this also be reported to the n8n developers? What are your thoughts and suggestions?
So, as one solution/alternative, I duplicated those last nodes specifically for error cases, to keep the error and normal flows separate.
The results are promising for non-error cases; error cases would still go through those slow nodes at the end.
This also proved that the issue is what I shared in the previous post. But I believe this needs to be addressed by the n8n developers: when those error nodes were not executing anyway, why were they degrading execution performance?
Yes, I know the developers would suggest breaking the big workflow down into sub-workflows (avoiding a monolithic workflow), but I am still curious why it causes issues when those error-flow nodes are not called anyway.
This is why we usually ask for sharing the workflow because sometimes the issues are stemming from somewhere else.
Just from the screenshot section you sent, I can tell that you have a very big and complex workflow, which will likely cause performance issues (even more so if you have the setting enabled to store execution data), and it screams inefficiency.
I agree with your last point that breaking this workflow down into smaller workflows will definitely have some benefits; however, without the workflow I can't tell if it will solve your problem.
When splitting the workflow:
- You separate your concerns (grouping functions or pieces of logic that can be re-used together)
- Each sub-workflow runs as a new execution, completely separate from the main orchestration workflow
- You reduce the number of lines flowing into a single node
I definitely think changing the design of your workflow will 10x your performance, and that this is likely not an n8n shortcoming. Trust me, I can build slow applications in the most performant languages.