N8n in worker / queue mode config for webhook?

Hello,

I need to make sure the self-hosted Community version works properly in queue mode with:

  • a primary instance
  • a webhook
  • a worker

Actually, it seems to work, but something looks weird to me.
I've set up a form in n8n, and the only URL that works is the production one, like:

https://webhook-processor-production-myhoster.app/form/myform
The test URL
https://webhook-processor-production-myhoster.app/form-test/myform
returns:
Cannot GET /form-test/myform

The primary instance has been assigned a domain, handled on Cloudflare, and it works perfectly (no port in the URL, just the subdomain), like:
https://n8n.domain.com

I can access it, create workflows, etc.
But if I try:
https://n8n.domain.com/form/myform
I get a 404 from n8n.

So for the webhook I've defined a new CNAME in Cloudflare, like:
https://webhook.n8n.domain.com/form/my-form

But when trying to access it, I get:

This site can't provide a secure connection

webhook.n8n.domain.com uses an unsupported protocol.

ERR_SSL_VERSION_OR_CIPHER_MISMATCH

I would also expect the generated test and production URLs to match the subdomain, but n8n is still generating them from:
https://webhook-processor-production-myhoster.app

I hope this is clear enough to get a bit of help.
Here, I'd like to use worker mode even without the Enterprise edition, to be able to scale with a robust solution and, hopefully, reach the Startup plan at some point :smiley:
Thanks for your help!

It looks like your topic is missing some important information. Could you provide the following, if applicable?

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

EDIT: OK, for anyone who runs into this issue:
Cloudflare does not easily accept that sub.sub.domain setup.

So the workaround is to declare a simpler, classic subdomain:
webhook-n8n.mydomain.com

This way, you don't have to handle TLS yourself (if you did handle it yourself, you could keep the sub.sub.domain).

I also redeployed my instance to apply the new subdomain.
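
To make it concrete, the Cloudflare side looks roughly like this (the record target is whatever endpoint your hoster gives you, mine is just an example, and this assumes Cloudflare's free Universal SSL certificate, which only covers first-level subdomains):

Type    Name          Target                                       Proxy
CNAME   webhook-n8n   webhook-processor-production-myhoster.app   Proxied   <- covered by Universal SSL, works
CNAME   webhook.n8n   webhook-processor-production-myhoster.app   Proxied   <- second-level subdomain, not covered, hence ERR_SSL_VERSION_OR_CIPHER_MISMATCH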

Hope that helps! :slight_smile:


That is handy. Looking at your message, I would have assumed it was a load balancer issue or maybe the editor URL wasn't set.

Nice work.

A bit tricky, yes :slight_smile:
The only issue I have is that the production URL includes the port number, like this:

https://my-sub-sub.domain.com:5678/form/myform

I have to strip it off when calling a webhook. It might be related to the declared domains, or maybe something to polish with a reverse proxy like Traefik…
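
My guess, to be confirmed, is that when WEBHOOK_URL isn't set, n8n falls back to building its URLs from the protocol, host and port it listens on, roughly:

# fallback when WEBHOOK_URL is not set (my understanding):
#   {protocol}://{host}:{port}/form/...   ->   https://my-sub-sub.domain.com:5678/form/myform
# explicit override, no port in the generated URLs:
WEBHOOK_URL="https://my-sub-sub.domain.com"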

Hey @NiKoolass

That sounds like you don't have the webhook URL set correctly. Can you share the env options you have set?

Sure!

Primary instance

DB_TYPE="postgresdb"
ENABLE_ALPINE_PRIVATE_NETWORKING="true"
EXECUTIONS_DATA_MAX_AGE="672"
EXECUTIONS_DATA_SAVE_ON_ERROR="all"
EXECUTIONS_DATA_SAVE_ON_SUCCESS="all"
EXECUTIONS_MODE="queue"
N8N_DISABLE_PRODUCTION_MAIN_PROCESS="true"
N8N_ENCRYPTION_KEY="xxx"
N8N_LISTEN_ADDRESS="::"
N8N_LOG_LEVEL="info"
N8N_USE_PYTHON_FUNCTIONS="true"
NODE_FUNCTION_ALLOW_BUILTIN="*"
PORT="5678"
QUEUE_BULL_REDIS_HOST="${{Redis.REDISPROXYHOST}}"
QUEUE_BULL_REDIS_PASSWORD="${{Redis.REDIS_PASSWORD}}"
QUEUE_BULL_REDIS_PORT="${{Redis.REDISPROXYPORT}}"
QUEUE_BULL_REDIS_USERNAME="${{Redis.REDISUSER}}"
N8N_WEBHOOK_TEST_URL="https://main.website.com"
N8N_HOST="prod-main.website.com"
N8N_PROTOCOL="https"
N8N_WEBHOOK_URL="https://prod-main.website.com"
N8N_EDITOR_BASE_URL="https://main.website.com"

Webhook processor

DB_TYPE="postgresdb"
ENABLE_ALPINE_PRIVATE_NETWORKING="true"
EXECUTIONS_DATA_MAX_AGE="672"
EXECUTIONS_DATA_PRUNE="true"
EXECUTIONS_DATA_SAVE_ON_ERROR="all"
EXECUTIONS_DATA_SAVE_ON_SUCCESS="all"
EXECUTIONS_MODE="queue"
N8N_ENCRYPTION_KEY="${{Primary.N8N_ENCRYPTION_KEY}}"
N8N_LISTEN_ADDRESS="::"
N8N_LOG_LEVEL="debug"
N8N_PROTOCOL="https"
N8N_HOST="main-prod.website.com"
N8N_USE_PYTHON_FUNCTIONS="true"
NODE_FUNCTION_ALLOW_BUILTIN="*"
PORT="5678"
QUEUE_BULL_REDIS_HOST="${{Redis.REDISPROXYHOST}}"
QUEUE_BULL_REDIS_PASSWORD="${{Redis.REDIS_PASSWORD}}"
QUEUE_BULL_REDIS_PORT="${{Redis.REDISPROXYPORT}}"
QUEUE_BULL_REDIS_USERNAME="${{Redis.REDISUSER}}"
WEBHOOK_URL="https://prod-main.website.com"
N8N_WEBHOOK_TUNNEL="true"
N8N_EDITOR_BASE_URL="https://main.website.com"

Worker

DB_TYPE="postgresdb"
ENABLE_ALPINE_PRIVATE_NETWORKING="true"
EXECUTIONS_DATA_MAX_AGE="672"
EXECUTIONS_DATA_PRUNE="true"
EXECUTIONS_DATA_SAVE_ON_ERROR="all"
EXECUTIONS_DATA_SAVE_ON_SUCCESS="all"
EXECUTIONS_MODE="queue"
N8N_ENCRYPTION_KEY="${{Primary.N8N_ENCRYPTION_KEY}}"
N8N_LISTEN_ADDRESS="::"
N8N_LOG_LEVEL="info"
N8N_USE_PYTHON_FUNCTIONS="true"
NODE_FUNCTION_ALLOW_BUILTIN="*"
PORT="5678"
QUEUE_BULL_REDIS_HOST="${{Redis.REDISPROXYHOST}}"
QUEUE_BULL_REDIS_PASSWORD="${{Redis.REDIS_PASSWORD}}"
QUEUE_BULL_REDIS_PORT="${{Redis.REDISPROXYPORT}}"
QUEUE_BULL_REDIS_USERNAME="${{Redis.REDISUSER}}"
WEBHOOK_URL="https://prod-main.website.com"
N8N_SKIP_WEBHOOK_SETUP="true"

That will be it: you only have WEBHOOK_URL on one worker; on the others you are using N8N_WEBHOOK_URL, which isn't one of our options.

You also have the tunnel enabled on one of them. I would recommend checking the env options you have set against our documentation to see what you should be using.
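
Roughly, the main and webhook instances should end up with something like this (hostnames are placeholders; point WEBHOOK_URL at whichever public hostname actually receives the webhook traffic):

WEBHOOK_URL="https://prod-main.website.com"
N8N_EDITOR_BASE_URL="https://main.website.com"
# and drop N8N_WEBHOOK_URL, N8N_WEBHOOK_TEST_URL and N8N_WEBHOOK_TUNNEL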


Hello!
Thanks, that did the trick.
But now it's really weird, as everything is much slower and I get timeouts on files of 20 MB.
I used to be able to go up to 30 MB, and I have set the options below in all three instances' .env files.
Some workflows used to execute in 5 seconds; now they take more than a minute, even 2 minutes with a 20 MB file…
More specifically, I get a 524 from Cloudflare. Unproxying my instance fixed it, but since it was working fine before, I'm a bit surprised by this sudden limitation :astonished:

N8N_PAYLOAD_SIZE_MAX="512"
N8N_FORMDATA_FILE_SIZE_MAX="2048"
EXECUTIONS_TIMEOUT="-1"
EXECUTIONS_TIMEOUT_MAX="600"
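
For context, my understanding of those settings (worth double-checking against the docs):

# N8N_PAYLOAD_SIZE_MAX        -> maximum request payload size, in MiB
# N8N_FORMDATA_FILE_SIZE_MAX  -> maximum form-data file size, in MiB
# EXECUTIONS_TIMEOUT=-1       -> no default execution timeout
# EXECUTIONS_TIMEOUT_MAX      -> upper limit, in seconds, that a per-workflow timeout can be set to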

If that inspires any comments, please share them :slight_smile:
Thanks for your help!

The Cloudflare 524 is generally not an issue on the n8n side and is unlikely to be related.

On the longer run times: it can take a bit longer, as jobs are put into Redis to be picked up by the worker nodes, but that usually adds only a couple of seconds, so I suspect something else is going on.

What are you doing with the files? Are they being sent to n8n, downloaded using n8n, or sent to another service through a workflow?

I'm using an HTTP request to pass the files.
I've worked quite hard on the configuration, and now it all seems right! (Successfully tested PDF files up to 35 MB.)


@NiKoolass Could you provide your final configuration for all three components, please? I am trying to build a similar setup, but I am not sure what you did about the WEBHOOK_URL mentioned by @Jon. Did you add it to all three? Also, some of the envs you used seem to be outdated/deprecated by now.

Hello,
They are indeed, and the new runner mode is making my life hard these days…
I hope the n8n team won't force runner mode too soon.
It looks like we must redo all our deployments with custom Docker images to include additional npm libs with this new runner feature.
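
For what it's worth, the kind of image rebuild I mean looks roughly like this (module names are just examples, and I'm assuming the usual "bake the modules into the image and allow them" approach still applies with runners in internal mode):

FROM docker.n8n.io/n8nio/n8n:latest
USER root
# example extra libraries for the Code node (placeholders)
RUN npm install -g moment lodash
USER node
# then, at runtime, allow them:
# NODE_FUNCTION_ALLOW_EXTERNAL=moment,lodash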
There’s a fun post around

Cheers