Is this a valid repository? GitHub - n8n-io/n8n-eks-cluster: Multi-instance n8n setup on AWS EKS. I tried it and the n8n-main and n8n-worker pods never functioned.
Hey @Kushal_Bindra,
That is something we have been working on, but it may contain issues. If you are planning to take the AWS EKS route, I would recommend using that repo only as an example; knowledge of AWS EKS would be a requirement for using it.
Did you look into any of the logs to see if there were any errors?
Hi Jon,
It looked like something that would fit my use case exactly.
I'm currently getting this error for the main pod:
kubectl logs pod/n8n-main-79cf98b769-rgqrw -n test-namespace-01
Initializing n8n process
Error: There was an error initializing DB
Error: getaddrinfo ENOTFOUND postgres.development.svc.cluster.local
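For reference, that `ENOTFOUND` means the in-cluster DNS name isn't resolving. In-cluster service names follow `<service>.<namespace>.svc.cluster.local`, so a first check is whether the postgres Service actually lives in the namespace the hostname points at. A minimal sketch (the namespace values below are taken from the error message and the kubectl command above, and `DB_POSTGRESDB_HOST` is n8n's standard env var for the database host; adjust all of these to your setup):

```shell
# Build the in-cluster DNS name you would expect if postgres were
# deployed alongside the n8n pod. The pod above runs in
# test-namespace-01, but the error tries to resolve
# postgres.development.svc.cluster.local, so either the Service is
# missing or DB_POSTGRESDB_HOST points at the wrong namespace.
service="postgres"
namespace="test-namespace-01"
fqdn="${service}.${namespace}.svc.cluster.local"
echo "$fqdn"

# Then verify where the Service actually exists:
#   kubectl get svc postgres -n development
#   kubectl get svc postgres -n test-namespace-01
```

If the Service shows up in one namespace but the hostname names another, fixing the configured host (or moving the Service) resolves the `getaddrinfo` failure.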
I also added a change to check whether the certificates already exist in the certificate manager. I can raise a PR for it.
Hey @Kushal_Bindra,
So that error looks like the database isn't starting, as the address isn't resolving at the time. I wouldn't worry too much about pull requests on that repo, as I don't know if we will eventually close it in favour of one of the others.
Have you made any changes to the code, and are you running this on a Kubernetes cluster created differently than as documented in the repo?
I did add an enhancement to the certificates: if the root and wildcard certificates are already present in the certificate manager, then I don't create them again. Other than that, I did not make any changes.
This repository isn't ready for general production usage just yet.
I've made the repo private to avoid misleading people, and to also avoid creating unnecessary support work for us.
That said, we are actively working on this solution for our enterprise customers, and once we have more instances of this running in production, we'll open it back up for general use (possibly with a restrictive license).
Fair enough, I was looking to deploy the solution with the self-hosted enterprise license. I will wait for the repo to be released.
Hi @netroy @Jon, can the repo now be used for an EKS production setup? We have purchased the enterprise license.
@Kushal_Bindra Yes, you can use that repo as a starting point, but please look at the "Currently Not Implemented" section. Any production system needs to implement those items to be reliable.
The reasons we haven't implemented those are:
- Lack of time and resources
- We want to build this as a very generic starting point, instead of trying to create a solution that covers the needs of every single customer.
Also, sorry that we ended up closing your PR when we made the repo private. We'll revisit it as soon as we have someone working on this repo again.
When I follow the doc I end up with the message "0/2 nodes are available: 2 node(s) had untolerated taint {CriticalAddonsOnly: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling." What am I missing? I have followed the doc word-for-word (and fixed the one typo I found).
I have run kubectl get nodes -o json | jq '.items[].spec.taints' and got:
null
[
  {
    "effect": "NoSchedule",
    "key": "CriticalAddonsOnly"
  }
]
ChatGPT has suggested creating a new node group by running eksctl create nodegroup --cluster my-cluster --name user-nodes. I would rather keep my system as per the documentation (or improve the documentation). What do people suggest, is this the right answer?
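For what it's worth, that scheduler message means both nodes carry the CriticalAddonsOnly:NoSchedule taint, so ordinary workloads have nowhere to land. Two common ways out, sketched below; the cluster/nodegroup names are placeholders from the eksctl suggestion above, and the toleration YAML is a sketch rather than something taken from the repo's manifests:

```shell
# Option 1: add an untainted node group for user workloads, leaving the
# tainted nodes for system add-ons (names are placeholders):
#   eksctl create nodegroup --cluster my-cluster --name user-nodes

# Option 2: tolerate the taint in the n8n Deployment's pod spec. This
# schedules n8n onto nodes reserved for critical add-ons, which is
# usually not what the taint is there for:
cat > /tmp/toleration-sketch.yaml <<'EOF'
spec:
  template:
    spec:
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
          effect: "NoSchedule"
EOF
cat /tmp/toleration-sketch.yaml
```

Of the two, a dedicated node group preserves the intent of the taint, which is why the eksctl route is the more conventional fix.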
Looking at the storage (PVC) side, it says "no persistent volumes available for this claim and no storage class is set" for "n8n-claim0", and for "postgresql-pv" it says "waiting for pod postgres-77896476b6-brs5k to be scheduled" and "Waiting for a volume to be created either by the external provisioner 'ebs.csi.aws.com' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
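A note on those messages: "no storage class is set" usually means the PVC has no storageClassName and the cluster has no default StorageClass, and "waiting for pod ... to be scheduled" indicates WaitForFirstConsumer binding, so that volume won't provision until the taint/scheduling problem above is fixed anyway. A sketch of the checks, where the class name gp2 is an assumption (EKS clusters commonly ship gp2 or gp3):

```shell
# Inspect storage classes and the stuck claim (run against your cluster):
#   kubectl get storageclass
#   kubectl describe pvc n8n-claim0 -n <namespace>
#   kubectl get pods -n kube-system | grep ebs-csi   # is the provisioner running?

# If a suitable class exists but none is marked default, this standard
# annotation makes it the default (class name gp2 is an assumption):
patch='{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
echo "kubectl patch storageclass gp2 -p '$patch'"
```

With a default StorageClass in place, PVCs that name no class get dynamically provisioned by the EBS CSI driver instead of sitting in Pending.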