N8n ignores N8N_LOG_LEVEL env var value

Hi there!

I have n8n set up with Kubernetes and it has been working great. I want to lower the logging level to error. However, when I set the environment variable N8N_LOG_LEVEL to error, it is ignored. I know I am setting the environment variable correctly because:

  • I use a ConfigMap with other settings that n8n honors fine, for example EXECUTIONS_MODE set to queue
  • I shell into the container and I can see the environment variable there
  • I did the same steps and added the configuration with N8N_LOG_LEVEL as error for the n8n-workers instance (for queue mode), and it appears to work fine there: I do not see any logs on the n8n-workers containers
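
For context, the relevant ConfigMap entries look roughly like this (a sketch with placeholder names, not my actual manifest):

```yaml
# Sketch of the ConfigMap (placeholder metadata, not the actual manifest)
apiVersion: v1
kind: ConfigMap
metadata:
  name: n8n-config
data:
  EXECUTIONS_MODE: "queue"   # this setting is honored
  N8N_LOG_LEVEL: "error"     # this one appears to be ignored
```

I verify the variable inside the pod with `kubectl exec <pod-name> -- printenv N8N_LOG_LEVEL`.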

I'm using Docker image n8nio/n8n:0.160.0

Any ideas on how I can figure out what is going on?

Thanks

That is really odd; I would have expected it to work here as well, since it works for the worker instance and it is the same thing.

Can you send over the terminal output from when you check the env var?

It actually does not work for the workers either. I checked today and they show warning logs, when I believe they should only contain errors.

Here are the env vars. I obscured the sensitive information:

PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=***
TERM=xterm
DB_POSTGRESDB_HOST=***
N8N_PORT=***
NODE_FUNCTION_ALLOW_EXTERNAL=axios,form-data,uuid
QUEUE_RECOVERY_INTERVAL=60
QUEUE_BULL_REDIS_PASSWORD=***
EXECUTIONS_TIMEOUT=3600
EXECUTIONS_PROCESS=main
N8N_LOG_LEVEL=error
QUEUE_BULL_REDIS_DB=***
DB_TYPE=postgresdb
N8N_ENCRYPTION_KEY=***
DB_POSTGRESDB_USER=n8n
QUEUE_BULL_REDIS_HOST=***
EXECUTIONS_MODE=queue
QUEUE_BULL_REDIS_PORT=***
N8N_SKIP_WEBHOOK_DEREGISTRATION_SHUTDOWN=true
N8N_LOG_OUTPUT=console
DB_POSTGRESDB_PORT=***
WEBHOOK_TUNNEL_URL=https://***
DB_POSTGRESDB_DATABASE=***
NODE_FUNCTION_ALLOW_BUILTIN=fs
DB_POSTGRESDB_PASSWORD=***
KUBERNETES_PORT_443_TCP_PORT=443
N8N_PORT_5678_TCP_PROTO=tcp
N8N_PUBLIC_PROD_SERVICE_HOST=***
N8N_PUBLIC_PROD_PORT=tcp://***
N8N_PUBLIC_PROD_PORT_5678_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP=tcp://***
N8N_SERVICE_PORT=***
N8N_PORT_5678_TCP=tcp://***
N8N_PORT_5678_TCP_PORT=***
N8N_PORT_5678_TCP_ADDR=***
N8N_PUBLIC_PROD_SERVICE_PORT=***
N8N_PUBLIC_PROD_PORT_5678_TCP=tcp://***
KUBERNETES_SERVICE_PORT=***
KUBERNETES_SERVICE_PORT_HTTPS=***
N8N_SERVICE_HOST=***
N8N_PUBLIC_PROD_PORT_5678_TCP_PORT=***
N8N_PUBLIC_PROD_PORT_5678_TCP_ADDR=***
KUBERNETES_PORT_443_TCP_ADDR=***
KUBERNETES_SERVICE_HOST=***
KUBERNETES_PORT=tcp://***
KUBERNETES_PORT_443_TCP_PROTO=tcp
NODE_VERSION=14.15.5
YARN_VERSION=1.22.5
NODE_ICU_DATA=/usr/local/lib/node_modules/full-icu
HOME=/root

That is very strange. Out of interest, if you set it to debug and restart the containers, does that change the amount of output you see?

Interestingly enough, it does show debug information when I change the level! So the issue is that the error level still shows info logs.

This is how the logs look with debug level:

2022-01-26T19:10:25.598Z | debug | Executing hook (hookFunctionsPush) {"executionId":"34433322","workflowId":101,"file":"WorkflowExecuteAdditionalData.js","function":"workflowExecuteBefore"}

This is the log line that still appears when the error level is set (the same as at info level):

Started with job ID: 21188485 (Execution ID: 34434140)

I would think that at the error level, those info logs would not appear at all. Is this expected? The reason I'm trying to remove these logs is that they do not add much information, and given the number of executions we run per hour, they add up over time on our logging service bill.

Looking at the docs, it should only show errors, but I am not sure whether we have done something wrong with the logging or the documentation needs to be made clearer.

On the plus side, at least the environment variable itself is working; it is just the level option being ignored. I will ask internally and see if anyone has any thoughts.

Hey @jon, did you have the chance to investigate the issue internally? Thanks!

Hey @Miguel_Caballero_Pin,

Not yet, I was meant to have a look on Monday but got distracted with something else.

I will look into it this morning before doing anything else.

Hey @Miguel_Caballero_Pin,

Quick update: I have tracked down the issue. It looks like we have a stray console.log() left in our workflow runner process.

I will need to check with the core team to see why this is there and if it is needed but I suspect we can just wrap it in our normal logger and it will solve the issue.
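
A minimal sketch (not n8n's actual logger implementation) of why a stray console.log() ignores N8N_LOG_LEVEL while level-aware logger calls are filtered:

```javascript
// Sketch only, NOT n8n's real logger: a leveled logger prints a message
// only when its level is at or below the configured threshold, while a
// raw console.log() bypasses the check entirely.
const LEVELS = { error: 0, warn: 1, info: 2, debug: 3 };

function makeLogger(level) {
  const threshold = LEVELS[level] ?? LEVELS.info;
  const shouldLog = (msgLevel) => LEVELS[msgLevel] <= threshold;
  const log = (msgLevel, message) => {
    if (shouldLog(msgLevel)) {
      console.log(`${new Date().toISOString()} | ${msgLevel} | ${message}`);
    }
  };
  return {
    error: (m) => log('error', m),
    info: (m) => log('info', m),
    shouldLog,
  };
}

const logger = makeLogger(process.env.N8N_LOG_LEVEL || 'info');

logger.info('Started with job ID: ...'); // suppressed when level is "error"
console.log('Started with job ID: ...'); // always prints, whatever the level
```

Wrapping the stray console.log() in a call like logger.info() would make that line respect the configured level.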

Awesome! Thanks @jon. Please keep me posted.


Hey @jon any news on this? Thanks!

Hey @Miguel_Caballero_Pin,

Not yet, the change is going through the normal prioritisation process.