N8N not initializing - stuck after user generated settings saved

Hi, I’m trying to migrate my n8n deployment from an older AWS EKS Kubernetes cluster to a newer one. I’m deploying via the n8n Helm chart, and on the new cluster the deployment gets stuck after:

Initializing n8n process
UserSettings were generated and saved to: /home/node/.n8n/config

I don’t get any logs after that. Even with debug mode enabled, it just shows a line about lazy loading and then hangs. On my older cluster everything works as expected: the logs show that n8n is ready and listening for requests, and the app works fine. On the newer cluster, nothing happens.
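For anyone reproducing this: debug logging in n8n is controlled by the `N8N_LOG_LEVEL` environment variable. In a Kubernetes deployment that is typically set on the container spec; a minimal sketch (surrounding Deployment fields omitted):

```yaml
# Container env excerpt; N8N_LOG_LEVEL is n8n's documented
# log-level setting (info, warn, error, debug).
env:
  - name: N8N_LOG_LEVEL
    value: debug
```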

What could be stopping n8n from initializing here?

It looks like your topic is missing some important information. Could you provide the following if applicable?

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

Hi @Zephyra! Thanks for reaching out!

That’s odd that you are not seeing any logs at all after enabling debug mode…

Can you share some more information about your deployment? What version of n8n are you using?

We’re using n8n version 1.7.1 on Debian GNU/Linux 11 (bullseye), deployed to Kubernetes via the 8gears Helm chart for n8n, with a Postgres database.
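For reference, with the 8gears chart the port and database settings are typically supplied through the chart’s `config` and `secret` values, which get rendered into the `/n8n-config/config.json` and `/n8n-secret/secret.json` files seen in the startup log. A minimal sketch, assuming that chart’s conventions; all hostnames and values here are placeholders:

```yaml
# Hypothetical values.yaml excerpt for the 8gears n8n-helm-chart.
# The `config` map is rendered to /n8n-config/config.json and
# `secret` to /n8n-secret/secret.json inside the pod.
config:
  port: 5678            # n8n's default HTTP port
  database:
    type: postgresdb
    postgresdb:
      host: postgres.example.internal   # placeholder hostname
      port: 5432
secret:
  database:
    postgresdb:
      password: "changeme"              # placeholder secret
```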

I upgraded to the latest version and am still seeing the same behavior; it gets stuck after these logs:

Loading config overwrites [ '/n8n-config/config.json', '/n8n-secret/secret.json' ]
2024-06-03T04:25:46.072Z | info | Initializing n8n process "{ file: 'start.js', function: 'init' }"
2024-06-03T04:25:46.164Z | debug | Lazy Loading credentials and nodes from n8n-nodes-base "{\n credentials: 352,\n nodes: 444,\n file: 'LoggerProxy.js',\n function: 'exports.debug'\n}"
2024-06-03T04:25:46.169Z | debug | Lazy Loading credentials and nodes from @n8n/n8n-nodes-langchain "{\n credentials: 14,\n nodes: 70,\n file: 'LoggerProxy.js',\n function: 'exports.debug'\n}"

I also tried launching a new pod in our previous EKS cluster, which runs the older Kubernetes version (1.21), and that pod worked as expected. Could you please let me know why things might not be working on the newer Kubernetes version (1.29)?

@Zephyra, did you by any chance customize the HTTP port to a value below 1000 (instead of the default 5678)?
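For anyone landing here with the same symptom: on Linux, binding to a port below 1024 requires root privileges (or the `CAP_NET_BIND_SERVICE` capability), so an n8n container running as the non-root `node` user will silently fail to start listening if the configured port is below that range. A minimal sketch to check whether a port is bindable for the current user (`can_bind` is a hypothetical helper, not part of n8n):

```python
import socket

def can_bind(port: int) -> bool:
    """Try to bind a TCP socket to `port`; return True if it succeeds.

    On Linux, ports below 1024 are privileged: a non-root process
    (such as the `node` user inside the n8n container) gets
    PermissionError (EACCES) and the server never starts listening.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind(("127.0.0.1", port))
        return True
    except PermissionError:
        return False
    finally:
        s.close()

print(can_bind(0))   # port 0 = any ephemeral port; always allowed
print(can_bind(80))  # False when run as a non-root user on default Linux
```

If the second check prints False for the port you configured, switching back to the default 5678 (or any unprivileged port) should let n8n proceed past initialization.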

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.