How to configure n8n to stream logs to AWS CloudWatch from a k8s nodegroup?

Describe the problem/error/question

Need help understanding whether there are certain env vars to set so that logs get routed to AWS CloudWatch for all pods in the nodegroup. New to k8s and have a mildly passive-aggressive ops team to deal with.

What is the error message (if any)?

Please share your workflow

(Select the nodes on your canvas and use the keyboard shortcuts CMD+C/CTRL+C and CMD+V/CTRL+V to copy and paste the workflow.)

Share the output returned by the last node

Information on your n8n setup

  • n8n version: 2.1.1
  • Database (default: SQLite): postgres RDS
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app): k8s
  • Operating system: linux

Hi @adarsh-lm
dealing with ops teams is always a good time. :grin:

the short answer is no. n8n doesn’t actually have any specific environment variables for cloudwatch. it just dumps everything straight to standard output (stdout) by default.

in kubernetes, the app itself doesn’t ship the logs. the cluster does. your ops team needs to run a logging agent like fluent bit or the official aws cloudwatch daemonset across the nodegroup. that agent just sits there, scrapes the stdout from all your n8n pods, and pushes it up to aws automatically.

all you really need to do on your end is set N8N_LOG_LEVEL=info or debug in your deployment env vars. that just makes sure the pods are actually spitting out enough detail for the agent to grab.
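for example, in your n8n Deployment that could look something like this (just a sketch — the image tag, log level, and names here are placeholders, adjust to your actual manifest; `N8N_LOG_OUTPUT` defaults to `console` anyway, it's just shown for clarity):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: n8n
spec:
  template:
    spec:
      containers:
        - name: n8n
          image: n8nio/n8n:latest   # placeholder tag
          env:
            - name: N8N_LOG_LEVEL
              value: "info"          # or "debug" while troubleshooting
            - name: N8N_LOG_OUTPUT
              value: "console"       # default; logs go to stdout for the agent to scrape
```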

just tell them the app logs to standard output. any standard k8s ops team knows how to route that.

Good luck!


Hi @adarsh-lm!
Hope you’re doing well.
You don’t need any special configuration in n8n. Just make sure n8n is logging normally to stdout (which it does by default), and ensure your nodegroup has CloudWatch Agent or Fluent Bit properly configured to collect container logs and forward them to CloudWatch.

Okay, regarding the nodegroup having to be configured with a CloudWatch Agent: is this something defined in the app’s kubernetes manifest, or is it managed via kubectl on the cluster itself?

thank you for the clarification!

@adarsh-lm
that’s a cluster-level thing.

you don’t put any cloudwatch config in your n8n app manifest at all. your ops team manages that entirely on the cluster side using kubectl or helm.

usually they deploy something called a DaemonSet, basically just a rule that forces a logging agent like fluent bit to run in the background on every single node.

since that agent watches the whole node, it automatically grabs the standard output from your n8n pod and ships it up to aws.
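if you’re curious what that looks like on their side, a common approach on EKS is installing the aws-for-fluent-bit helm chart from AWS’s public eks-charts repo — a rough sketch below, assuming that chart’s values haven’t changed (the region and log group name are placeholders; double-check the chart’s current docs before anyone runs this):

```shell
# add AWS's EKS charts repo and install Fluent Bit as a DaemonSet
helm repo add eks https://aws.github.io/eks-charts
helm install aws-for-fluent-bit eks/aws-for-fluent-bit \
  --namespace kube-system \
  --set cloudWatchLogs.region=us-east-1 \
  --set cloudWatchLogs.logGroupName=/aws/eks/my-cluster/workloads
```

once that’s running on every node it picks up container stdout (including your n8n pods) with zero changes to your app manifest.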