Version bump from 1.59.3 to 1.60.0 failed

Hi, we are currently using version 1.59.3 self-hosted (Docker). We deployed it on a Kubernetes cluster with one deployment for main, one for webhook, and one for worker.
Unfortunately, when we try to update the version of the worker pod, it fails.
We get this error:

2024-10-09T12:27:20.975073292Z 2024-10-09T12:27:20.974Z | error    | Error: Worker exiting due to an error. "{ file: 'LoggerProxy.js', function: 'exports.error' }"
2024-10-09T12:27:20.975299196Z 2024-10-09T12:27:20.975Z | error    | TypeError: Cannot read properties of undefined (reading 'subscribe') "{ file: 'LoggerProxy.js', function: 'exports.error' }"

And the pod restarts again and again…

Do you have an idea about this issue? Thank you for your help.

It looks like your topic is missing some important information. Could you provide the following, if applicable?

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

No idea?

At the very least, could someone tell us the best practice for updating the version of a self-hosted n8n:

  • should we run any database migrations?
  • is there a specific order in which to bump the n8n version across the different pods (worker, webhook, main)?

It would be a great help to us.

Thank you

The error doesn’t ring any bells for me, but to be fair I have never used Kubernetes for n8n (or really at all, other than playing around).

Normally, with such a small version jump, it should not be a problem to just change the version; n8n should then take care of any migrations needed.

Did you update the main instance first and let that do its thing before starting the others?

With regards to Kubernetes and updating I have no idea. Is there a specific reason you are using Kubernetes?
I see a lot of people using Kubernetes for n8n where it is definitely not needed and only adds complexity to the setup. Of course, if you are experienced with Kubernetes it is no big deal.
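
That said, the general "main first, then the rest" order would translate to roughly the following. This is purely a sketch to illustrate the order (I have not run this on Kubernetes myself); the deployment and container names (n8n-main, n8n-worker, n8n-webhook, container n8n) and the image tag are assumptions, not anything n8n prescribes, so adjust them to your manifests:

// Sketch only: bump the main deployment first so it can run any DB migrations,
// then roll the worker and webhook deployments to the same image tag.
import { execSync } from 'node:child_process';

const IMAGE = 'n8nio/n8n:1.60.0';
const run = (cmd: string) => execSync(cmd, { stdio: 'inherit' });

// 1. Main instance first; wait until its rollout (and the migrations) is done.
run(`kubectl set image deployment/n8n-main n8n=${IMAGE}`);
run('kubectl rollout status deployment/n8n-main');

// 2. Only then roll the worker and webhook pods, which expect the already-migrated database.
for (const name of ['n8n-worker', 'n8n-webhook']) {
  run(`kubectl set image deployment/${name} n8n=${IMAGE}`);
  run(`kubectl rollout status deployment/${name}`);
}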

Hi @BramKn, thank you for your help.

Yes I was very surprised that it didn’t work the first time.

Why Kubernetes? We manage all our application deployments via Kubernetes, so it was the natural candidate.

With Kubernetes we can define multiple pods that we can scale, which matters because we plan to use n8n heavily and need to be able to scale the worker pods!

I think we updated the main instance first and then the workers… I’ll try the procedure again. But what was surprising is that we started with a new database and a new Redis and it still failed…

Sounds logical to go with Kubernetes in your case then :slight_smile:
In my opinion you often do not need to scale your workers per se, only if you have a lot of time-sensitive tasks coming in big batches.

For your issue, if you start with a new database the instance should be fresh. So it seems it is something specific to your Kubernetes setup that is causing this issue.
Have you tried just using the latest version?

Yes, we deactivated the worker pod and just started the main instance with version 1.61.0, and it works fine (I planned to test 1.62.3). But if we try to bump the version of the worker pod to 1.61.0, it fails :frowning:

Hey @kant,

Can you share the env options you are setting for the worker?

Hi, we are using this config for our worker:

env: {
    DB_TYPE: 'postgresdb',
    DB_POSTGRESDB_HOST: 'x.x.x.x',
    DB_POSTGRESDB_PORT: '5432',
    DB_POSTGRESDB_DATABASE: 'our-database',
    DB_POSTGRESDB_USER: 'our-db-user',
    DB_POSTGRESDB_PASSWORD: '[MASKED]',
    N8N_ENCRYPTION_KEY: '[MASKED]',
    N8N_PROTOCOL: 'http',
    N8N_PORT: '5678',
    EXECUTIONS_MODE: 'queue',
    QUEUE_BULL_REDIS_HOST: 'our-redis.cache.windows.net',
    QUEUE_BULL_REDIS_PASSWORD: '[MASKED]',
    QUEUE_BULL_REDIS_PORT: '6380',
    QUEUE_BULL_REDIS_TLS: 'true',
    WEBHOOK_URL: 'https://our-webhook-url',
    N8N_METRICS: 'false',
    N8N_LOG_LEVEL: 'info'
  },

BTW: we started a fresh install based on version 1.62.3 (new cluster, new database, new Redis, new persistentVolume) with 3 Kubernetes deployments (main, worker, webhook) and we have no more problems :man_shrugging:

What’s really strange is that on the initial cluster we changed the database and Redis but the problem was still there. We must have missed something.

If you have an idea, we still have the old cluster, so we can try something…

Thanks a lot for your help.

BTW: I saw this topic: Queue mode error from workers
It has the same issue with LoggerProxy.js. What does it mean? Is there an issue with Redis?

I think it may have had an issue connecting to Redis. If you set the log level to debug it might show a bit more information.
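
If you want to rule Redis out, you could also try a tiny standalone connection test from the same network as the worker. Here is a rough sketch using ioredis (the client Bull relies on); the host, port, and password are placeholders for your QUEUE_BULL_REDIS_* values, and the channel name is made up:

// Standalone Redis TLS connectivity check (sketch only). The values below are
// placeholders for the QUEUE_BULL_REDIS_* settings from the worker env above.
import Redis from 'ioredis';

const redis = new Redis({
  host: 'our-redis.cache.windows.net',
  port: 6380,
  password: process.env.REDIS_PASSWORD,
  tls: {}, // Azure Cache for Redis on port 6380 requires TLS
});

redis.on('error', (err) => console.error('Redis error:', err.message));

async function main() {
  console.log('PING ->', await redis.ping()); // expect "PONG"
  await redis.subscribe('n8n-test');          // exercises the subscriber path the worker also needs
  console.log('subscribe OK');
  redis.disconnect();
}

void main();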

Sorry, that was already the case; I have N8N_LOG_LEVEL: 'debug'.

I copied and pasted the full log of my pod, maybe it will help…

2024-10-11T09:30:14.607Z | debug    | Starting n8n worker... {"file":"worker.js","function":"init"}
2024-10-11T09:30:14.609105293Z 2024-10-11T09:30:14.608Z | debug    | Queue mode id: worker-xTFcYEGWBSZul5N7 {"file":"worker.js","function":"init"}
2024-10-11T09:30:14.729553104Z 2024-10-11T09:30:14.729Z | debug    | Lazy Loading credentials and nodes from n8n-nodes-base {"credentials":363,"nodes":460,"file":"LoggerProxy.js","function":"exports.debug"}
2024-10-11T09:30:14.737473450Z 2024-10-11T09:30:14.737Z | debug    | Lazy Loading credentials and nodes from @n8n/n8n-nodes-langchain {"credentials":15,"nodes":78,"file":"LoggerProxy.js","function":"exports.debug"}
2024-10-11T09:30:14.927629841Z 2024-10-11T09:30:14.927Z | debug    | [license] initializing for deviceFingerprint xxxyyyzzz {"file":"LicenseManager.js","function":"log"}
2024-10-11T09:30:14.934540168Z 2024-10-11T09:30:14.934Z | debug    | License initialized {"file":"license.js","function":"init"}
2024-10-11T09:30:14.934Z | debug    | License init complete {"file":"worker.js","function":"init"}
2024-10-11T09:30:14.936660407Z 2024-10-11T09:30:14.936Z | debug    | Binary data service init complete {"file":"worker.js","function":"init"}
2024-10-11T09:30:14.937315819Z 2024-10-11T09:30:14.937Z | debug    | External hooks init complete {"file":"worker.js","function":"init"}
2024-10-11T09:30:14.940218772Z 2024-10-11T09:30:14.939Z | debug    | External secrets init complete {"file":"worker.js","function":"init"}
2024-10-11T09:30:14.940640980Z 2024-10-11T09:30:14.940Z | debug    | Initializing event bus... {"file":"message-event-bus.js","function":"initialize"}
2024-10-11T09:30:14.943962841Z 2024-10-11T09:30:14.943Z | debug    | Initializing event writer {"file":"message-event-bus.js","function":"initialize"}
2024-10-11T09:30:14.946579989Z 2024-10-11T09:30:14.946Z | debug    | Checking for unsent event messages {"file":"message-event-bus.js","function":"initialize"}
2024-10-11T09:30:14.946920695Z 2024-10-11T09:30:14.946Z | debug    | Start logging into /home/node/.n8n/n8nEventLog-worker.log  {"file":"message-event-bus.js","function":"initialize"}
2024-10-11T09:30:14.950768966Z 2024-10-11T09:30:14.950Z | debug    | MessageEventBus initialized {"file":"message-event-bus.js","function":"initialize"}
2024-10-11T09:30:14.951Z | debug    | Event bus init complete {"file":"worker.js","function":"init"}
2024-10-11T09:30:14.954737039Z 2024-10-11T09:30:14.954Z | debug    | [Concurrency Control] Service disabled {"file":"concurrency-control.service.js","function":"log"}
2024-10-11T09:30:14.984135779Z 2024-10-11T09:30:14.983Z | debug    | [Redis] Initializing regular client {"type":"client(bull)","host":"xxx.redis.cache.windows.net","port":6380,"file":"redis-client.service.js","function":"createRegularClient"}
2024-10-11T09:30:14.989967386Z 2024-10-11T09:30:14.989Z | debug    | [Redis] Initializing regular client {"type":"subscriber(bull)","host":"xxx.redis.cache.windows.net","port":6380,"file":"redis-client.service.js","function":"createRegularClient"}
2024-10-11T09:30:14.990533296Z 2024-10-11T09:30:14.990Z | debug    | [ScalingService] Queue setup completed {"file":"scaling.service.js","function":"setupQueue"}
2024-10-11T09:30:14.991208908Z 2024-10-11T09:30:14.991Z | debug    | [ScalingService] Worker setup completed {"file":"scaling.service.js","function":"setupWorker"}
2024-10-11T09:30:14.992670635Z 2024-10-11T09:30:14.992Z | error    | Error: Worker exiting due to an error. {"file":"LoggerProxy.js","function":"exports.error"}
2024-10-11T09:30:14.992685336Z 2024-10-11T09:30:14.992Z | error    | TypeError: Cannot read properties of undefined (reading 'subscribe') {"file":"LoggerProxy.js","function":"exports.error"}

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.