The delay between receiving the webhook and the workflow starting is too high: sometimes over 90 seconds. The webhook executions stay in “new” status for that long.
We set QUEUE_HEALTH_CHECK_ACTIVE to true and increased concurrency to 75!
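For context, a minimal sketch of the settings described above, assuming queue mode is already enabled (the health-check env var and the `--concurrency` flag are standard n8n options; the value 75 just mirrors this post):

```shell
# Queue mode with the worker health-check endpoint enabled
export EXECUTIONS_MODE=queue
export QUEUE_HEALTH_CHECK_ACTIVE=true

# Start a worker with the raised per-worker concurrency
n8n worker --concurrency=75
```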
We saw a positive impact on execution speed, but queues still accumulated up to around 70 executions, waiting up to 2:30 minutes before everything executed within a second! Very strange behavior IMO.
I understand EXECUTIONS_DATA_PRUNE_MAX_COUNT is quite high, but it corresponds to around 15 days of history at our volume.
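As a quick sanity check on that claim, a back-of-envelope calculation (both numbers below are assumptions for illustration, not this user's actual settings):

```shell
# Hypothetical: how many days of history a prune max-count retains
daily_executions=20000     # assumed daily execution volume
prune_max_count=300000     # assumed EXECUTIONS_DATA_PRUNE_MAX_COUNT value

days_retained=$((prune_max_count / daily_executions))
echo "$days_retained"      # → 15
```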
This issue is really annoying; we’re thinking of creating a separate instance dedicated to the webhooks that need faster execution.
If you have any other thoughts and suggestions, it could be really helpful!
We had multiple workers for quite a while, but we had duplicated executions that were caused by that. (see this other discussion)
Regarding webhook processors, this is something we don’t want, since we need not only a fast response but also quick execution of the called process.
It really feels like there’s a blind spot here, an important bug that many people should be hitting, but I don’t find many other discussions about it on the community forum. This is weird.
Thank you @Parintele_Damaskin
There are actually multiple workflows queued, all triggered by webhooks, and some are quite short. On average, they all complete in under 1 second once they start. It really seems like an n8n behavior, but I’m not sure how it’s configured this way.
Ok… I reviewed the docs again and tried to understand it from all perspectives, but this is more a combination of:
How queue mode + workers behave.
How webhooks are handled.
And how concurrency and load are configured.
In queue mode each execution (including sub-workflows started via Sub-workflow nodes) is processed end-to-end by a single worker. Deep sub-workflow chains are deliberately kept on the same worker… ok… if you instead trigger sub-workflows via webhooks, each becomes its own execution and can go to different workers…
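So instead of a Sub-workflow node, the child workflow could be called via its own webhook, making it an independent execution that any worker can pick up. A hypothetical sketch (the URL and payload are invented for illustration):

```shell
# Hypothetical: call the "sub-workflow" through its own webhook endpoint,
# so it becomes a separate execution routed to any available worker
curl -X POST "https://n8n.example.com/webhook/process-order" \
  -H "Content-Type: application/json" \
  -d '{"orderId": 123}'
```

The trade-off is that the parent no longer waits for the child's result unless you use a Respond to Webhook node in the child.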
Webhook processors are just another way to scale incoming webhook traffic in queue mode; they still rely on Redis and EXECUTIONS_MODE=queue… ok… the main trade-off is separation of concerns and the ability to scale receivers vs executors independently… now my brain starts allocating more resources lol…
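A minimal sketch of that receiver/executor split, assuming a shared Redis instance reachable at a host named `redis` (hostname illustrative; `n8n webhook` and `n8n worker` are the standard commands for this):

```shell
# Shared settings: queue mode backed by Redis
export EXECUTIONS_MODE=queue
export QUEUE_BULL_REDIS_HOST=redis

# On the receiver node(s): only accepts incoming webhook calls,
# enqueues executions into Redis
n8n webhook

# On the executor node(s): pulls jobs from Redis and runs them
n8n worker --concurrency=25
```

Receivers and executors can then be scaled independently: more `n8n webhook` processes to absorb traffic spikes, more `n8n worker` processes to drain the queue faster.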
It means that concurrency is still the main limiter in queue mode: workers pull jobs from Redis and run up to their concurrency limit in parallel. If the number of incoming webhook executions temporarily exceeds your effective concurrency (workers × concurrency per worker, capped by N8N_CONCURRENCY_PRODUCTION_LIMIT), executions will accumulate in “queued” until capacity frees up, then they are processed very quickly…
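A back-of-envelope illustration of that formula (all numbers are assumed, not taken from this deployment; 70 incoming executions echoes the backlog observed earlier in the thread):

```shell
# Effective concurrency = workers × per-worker concurrency,
# capped by N8N_CONCURRENCY_PRODUCTION_LIMIT
workers=2                  # assumed number of worker processes
per_worker=10              # assumed --concurrency per worker
production_limit=75        # assumed N8N_CONCURRENCY_PRODUCTION_LIMIT

effective=$((workers * per_worker))
if [ "$effective" -gt "$production_limit" ]; then
  effective=$production_limit
fi
echo "effective concurrency: $effective"

# A burst of incoming webhook executions beyond that capacity
# sits in "queued" until slots free up
incoming=70
backlog=$((incoming - effective))
if [ "$backlog" -lt 0 ]; then
  backlog=0
fi
echo "left waiting in 'queued': $backlog"
```

With these illustrative numbers, 20 executions run immediately and 50 wait in “queued”, which is exactly the accumulate-then-drain pattern described above.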
As a result, compensating for low per-worker concurrency by adding many workers can overload the DB, so the right numbers are infrastructure-dependent…
Now I am also thinking about a workflow-level RabbitMQ “queue”…