Configuring paths for health checks and ALB routing for n8n workflow deployment to Kubernetes

Describe the problem/error/question

Hello community,
I need some help validating my plan for deploying my n8n meta-workflow (a workflow that calls 12 other workflows via a Switch node) to a Kubernetes cluster.
My local docker-compose setup runs main, runners, and browserflow, alongside an external RDS database, S3, and an external FastAPI wrapper app, all working in tandem to run my workflow end-to-end.
The meta-workflow is the entry point for external calls and is triggered by a Webhook trigger.

The ops team is asking me to provide a path per service for routing and health checks. Locally I could simply hit localhost:5678/healthz, but in the Kubernetes manifests we cannot express it that way because the external domain comes into play. Also, providing the health path alone does not solve ingress routing.
After going through the n8n docs and Gemini, here is what I have figured out, and I need some validation from folks who have done something like this before:
  • [IMPORTANT VALIDATION POINT] The recommendation is to keep the paths for ALB routing separate from the paths for liveness/readiness.
  • All health checks will hit dedicated /healthz (or /pressure) endpoints, specified per service together with the port number under livenessProbe.
  • ALB routing will point only at the n8n-main service; we do not want external routing to the runners, browserless, or the kadal application.
  • So all external ALB routing paths for the workflow should go via /webhook and /webhook-test.
Can I get some validation or suggestions on how to put this into practice?
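For context, here is roughly what I have in mind for the probe side on the n8n-main Deployment, a sketch only: the names (n8n-main, port 5678) are my setup, and the /healthz and /healthz/readiness paths are the endpoints the n8n docs describe:

```yaml
# Container-spec fragment for the n8n-main Deployment (names illustrative).
# Probes are executed by the kubelet directly against the pod IP, so no
# Ingress/ALB or external domain is involved here.
containers:
  - name: n8n-main
    image: n8nio/n8n
    ports:
      - containerPort: 5678
    livenessProbe:
      httpGet:
        path: /healthz            # lightweight "process is up" check
        port: 5678
      initialDelaySeconds: 15
      periodSeconds: 10
    readinessProbe:
      httpGet:
        path: /healthz/readiness  # also verifies DB connectivity
        port: 5678
      initialDelaySeconds: 10
      periodSeconds: 10
```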

What is the error message (if any)?

Please share your workflow

(Select the nodes on your canvas and use the keyboard shortcuts CMD+C/CTRL+C and CMD+V/CTRL+V to copy and paste the workflow.)

Share the output returned by the last node

Information on your n8n setup

  • n8n version: 2.1.1
  • Database (default: SQLite): RDS postgres for k8s
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app): kubernetes
  • Operating system: linux

Hi @adarsh-lm !
If your team requires a single health check path on the ALB, you can create a dedicated internal rule or separate target group specifically for health checks, while still ensuring that runners and auxiliary services are not exposed externally. Alternatively, you can use a dedicated internal Kubernetes service exclusively for liveness/readiness probes.
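To sketch the dedicated health-check idea: with the AWS Load Balancer Controller, the target group's health check can be pointed at /healthz via annotations, independently of which paths are routed externally. The annotation names below are from the AWS Load Balancer Controller docs; everything else is illustrative:

```yaml
# Ingress metadata fragment (annotations only, rest of the Ingress omitted).
metadata:
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    # The ALB target group health-checks /healthz on the pod port,
    # even though only /webhook* paths are exposed in the routing rules.
    alb.ingress.kubernetes.io/healthcheck-path: /healthz
    alb.ingress.kubernetes.io/healthcheck-port: "5678"
```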

Yeah, your approach is solid. Keep the health probes internal to the pod spec using /healthz for liveness and /healthz/readiness for readiness; those don’t touch the ALB at all. For ALB ingress, just route /webhook/* and /webhook-test/* to n8n-main as a ClusterIP service and give runners/browserless no ingress. Make sure you set WEBHOOK_URL to your external domain so n8n generates the right callback URLs.
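A minimal Ingress matching that routing could look like this. It’s a sketch, not a drop-in manifest: the host, service name, and port are placeholders for your setup, and it assumes the AWS Load Balancer Controller is installed:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: n8n-main
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - host: n8n.example.com            # placeholder external domain
      http:
        paths:
          - path: /webhook             # production webhooks
            pathType: Prefix
            backend:
              service:
                name: n8n-main         # ClusterIP Service for the main pod only
                port:
                  number: 5678
          - path: /webhook-test        # test-mode webhooks
            pathType: Prefix
            backend:
              service:
                name: n8n-main
                port:
                  number: 5678
```

Runners and browserless then get plain ClusterIP Services with no Ingress rules, so they stay cluster-internal, and WEBHOOK_URL=https://n8n.example.com/ on the main pod keeps the generated webhook URLs consistent with the routed paths.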

Yeah, your approach is solid. Kubernetes liveness/readiness probes hit the pod directly via the kubelet, so they never touch the ALB anyway: just set httpGet /healthz for liveness and /healthz/readiness for readiness in your pod spec and you’re good. ALB routing should only expose /webhook/* paths to n8n-main; keep everything else as ClusterIP-only services. Check out the 8gears Helm chart if you haven’t, it handles this separation out of the box.