Hey n8n community,
I’ve built a fairly complex production-level workflow for one of our internal team projects, and I wanted to share some observations and ask for help on a few performance concerns.
Workflow Overview:
- Runs on self-hosted n8n (Docker-based).
- Initially tested with a chat trigger, and later switched to a webhook trigger for production.
- Contains a Switch node that conditionally routes to one of several sub-workflows.
- Only one of the sub-flows has a loop (runs 6–10 times).
- All sub-flows call our internal APIs, which are fast – only one has a ~2s response time.
What I Observed:
- With the chat trigger, the entire workflow completes in about 10–12 seconds.
- With the webhook trigger, execution time spikes to 30+ seconds, even though the actual workflow logic remains unchanged.
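For reference, here's roughly the Python harness I've been timing the webhook path with; the URL and payload are placeholders for our internal setup:

```python
# Minimal timing harness for the webhook path.
# WEBHOOK_URL and the payload are placeholders, not our real endpoint.
# Note: this measures HTTP turnaround, so it only reflects the full
# workflow duration when the Webhook node is set to respond after the
# last node finishes (rather than responding immediately).
import time

import requests

WEBHOOK_URL = "https://n8n.internal.example/webhook/my-flow"


def time_webhook(payload: dict, runs: int = 5) -> None:
    """POST the payload several times and print end-to-end turnaround."""
    for i in range(runs):
        start = time.perf_counter()
        resp = requests.post(WEBHOOK_URL, json=payload, timeout=120)
        elapsed = time.perf_counter() - start
        print(f"run {i + 1}: HTTP {resp.status_code} in {elapsed:.2f}s")


if __name__ == "__main__":
    time_webhook({"query": "test"})
```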
Questions:
- Why does switching from chat trigger to webhook trigger cause such a significant delay? Is there something inherently slower about the webhook node or how it’s handled?
- Is n8n production-ready for low-latency use cases like this? Or are there known limitations with the webhook-based flow execution?
- Any best practices to optimize throughput, reduce turnaround time (TAT), or improve performance in self-hosted setups?
We’re seriously considering n8n as part of our long-term automation stack, so any insights on scaling and performance tuning would be super helpful.
Thanks in advance! 
@kumar_shubham
Hello, nice to meet you. This is interesting, and my first thought would be to split the workflow down into sub-workflows and call them with the Execute Workflow node.
That said, I'd also like to test this. Do you have a basic version of what you're working with that you could share?
I can replace URLs and so on. I'm on Docker as well. Are you using any worker nodes? A Postgres DB? What's your setup, please?
Many thanks,
Samuel
Can you share details from the log panel?
It should show how long each node takes to execute.
That information might give you an idea of which node is actually causing the issue.
Hi Samuel,
Thanks for your response!
Yes, I've actually implemented the approach you suggested: splitting the main logic into sub-workflows and using the Execute Workflow node to call them.
Unfortunately, I won’t be able to share the exact workflow due to internal project restrictions. That said, I can try to put together a replica so you can reproduce the structure and behaviour.
Regarding the setup:
- I'm not using PostgreSQL or any external DB currently.
- Just the standard self-hosted Docker deployment.
- For external interactions, I’m using HTTP Request nodes to call internal APIs.
Appreciate you looking into this!
Best,
Kumar
Hi Darrell,
Thanks for pointing that out!
I did review the Execution Log panel, and here’s what I’ve noticed so far:
- The internal APIs are taking roughly the same time in both cases — whether the workflow is triggered via chat or via a webhook (tested through Postman and Python scripts).
- However, the overall execution time of the workflow increases significantly when using the webhook trigger.
- It feels like there’s extra latency introduced somewhere around the trigger or node orchestration — not within the core logic itself.
I’m going to run a few more tests to compare node-by-node execution times side by side and will share my findings here soon.
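In case it helps anyone else reproduce this, here's the rough script I plan to use. It pulls recent executions from the n8n public REST API; the base URL and API key are placeholders, and the field names for the per-node timings are assumptions based on the execution data my instance returns:

```python
# Sketch: list recent executions and print per-node execution times via
# the n8n public REST API (requires the API to be enabled and a key).
# BASE_URL / API_KEY are placeholders; the data.resultData.runData field
# names are assumptions based on what my instance returns.
import requests

BASE_URL = "http://localhost:5678"
API_KEY = "YOUR_N8N_API_KEY"


def print_node_timings(limit: int = 5) -> None:
    resp = requests.get(
        f"{BASE_URL}/api/v1/executions",
        headers={"X-N8N-API-KEY": API_KEY},
        params={"limit": limit, "includeData": "true"},
        timeout=30,
    )
    resp.raise_for_status()
    for execution in resp.json()["data"]:
        print(f"execution {execution['id']} started {execution['startedAt']}:")
        run_data = (execution.get("data") or {}).get("resultData", {}).get("runData", {})
        for node_name, runs in run_data.items():
            total_ms = sum(run.get("executionTime", 0) for run in runs)
            print(f"  {node_name}: {total_ms} ms over {len(runs)} run(s)")


if __name__ == "__main__":
    print_node_timings()
```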
Thanks again for the suggestion — super helpful for narrowing this down.
Best,
Kumar