Hi, we are currently using version 1.59.3 self-hosted (Docker). We deployed it on a Kubernetes cluster with one deployment for main, one for webhook, and one for worker.
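For context, the worker deployment looks roughly like this (a simplified sketch; names, hostnames, and secret values are placeholders rather than our exact config):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: n8n-worker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: n8n-worker
  template:
    metadata:
      labels:
        app: n8n-worker
    spec:
      containers:
        - name: n8n-worker
          image: n8nio/n8n:1.61.0
          args: ["worker"]            # start n8n in queue-mode worker
          env:
            - name: EXECUTIONS_MODE
              value: "queue"
            - name: QUEUE_BULL_REDIS_HOST
              value: "redis"          # placeholder hostname
            - name: DB_TYPE
              value: "postgresdb"
            - name: DB_POSTGRESDB_HOST
              value: "postgres"       # placeholder hostname
            - name: N8N_ENCRYPTION_KEY
              valueFrom:
                secretKeyRef:
                  name: n8n-secrets   # placeholder secret name
                  key: encryptionKey
```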
Unfortunately, when we try to update the version of the worker pod, it fails.
We get this error:
2024-10-09T12:27:20.975073292Z 2024-10-09T12:27:20.974Z | error | Error: Worker exiting due to an error. "{ file: 'LoggerProxy.js', function: 'exports.error' }"
2024-10-09T12:27:20.975299196Z 2024-10-09T12:27:20.975Z | error | TypeError: Cannot read properties of undefined (reading 'subscribe') "{ file: 'LoggerProxy.js', function: 'exports.error' }"
And the pod restarts again and again…
Do you have an idea about this issue? Thank you for your help.
The error doesn’t ring any bells for me.
But to be fair, I have never used Kubernetes for n8n (or really in general, other than playing around with it).
Normally with such a small version jump, it should not be a problem to just change the version and n8n should then take care of any migrations needed.
Did you update the main instance first and let that do its thing before starting the others?
With regards to Kubernetes and updating I have no idea. Is there a specific reason you are using Kubernetes?
I see a lot of people using Kubernetes for n8n where it is definitely not needed and only adds complexity to the setup. Of course, if you are experienced with Kubernetes it is no big deal.
Yes I was very surprised that it didn’t work the first time.
Why Kubernetes? We manage all our application deployments via Kubernetes, so it was the natural candidate to use.
With Kubernetes we can define multiple pods that we can scale, and that's a good point because we plan to use n8n massively, so we need scalability on the worker pods!
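For example, we can scale just the worker deployment independently of main and webhook, either by bumping replicas or with an autoscaler, roughly like this (a sketch; n8n-worker is simply what we named the deployment and the thresholds are illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: n8n-worker
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: n8n-worker       # our worker deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```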
I think we updated the main instance first and then the workers… I'll try the procedure again. But what was surprising is that we started with a new database and a new Redis and it still failed…
Sounds logical to go with Kubernetes in your case then
In my opinion you often do not need to scale your workers per se, only if you have a lot of time-sensitive tasks that come in big batches.
For your issue, if you start with a new database the instance should be fresh. So it seems it is something specific to your Kubernetes setup that is causing this issue.
Have you tried just using the latest version?
Yes, we deactivated the worker pod and just started the main instance with version 1.61.0, and it works fine (I planned to test 1.62.3). But if we try to bump the version of the worker pod to 1.61.0, it fails.
BTW: we started a fresh new install based on version 1.62.3 (new cluster, new database, new Redis, new persistentVolume) with 3 Kubernetes deployments: main, worker, webhook, and we have no more problems.
What's really strange is that for the initial cluster we changed the database and Redis but the problem was still there. We must have missed something.
If you have an idea, we still have the old cluster, so we can try something…