n8n ignores N8N_LOG_LEVEL env var value

Hi there!

I have n8n set up on Kubernetes and it has been working great. I want to lower the logging level to error. However, when I set the environment variable N8N_LOG_LEVEL to error, it is ignored. I know I am setting the environment variable correctly because:

  • I use a ConfigMap that holds other settings that n8n honors fine, for example EXECUTIONS_MODE set to queue
  • I can shell into the container and see the environment variable there
  • I applied the same configuration, with N8N_LOG_LEVEL set to error, to the n8n-workers instance (for queue mode) and there it works fine: I do not see any logs in the n8n-workers containers
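To double-check what the n8n process itself sees (as opposed to the shell), a quick check from Node is also possible. This is plain Node, not an n8n command:

```javascript
// Report the log level as seen from a process environment (plain Node, not n8n code)
function effectiveLogLevel(env = process.env) {
  return env.N8N_LOG_LEVEL ?? '(not set)';
}

console.log(effectiveLogLevel());
```

For example, running `node -e 'console.log(process.env.N8N_LOG_LEVEL)'` inside the container prints exactly what the process environment contains.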

I'm using the Docker image n8nio/n8n:0.160.0

Any ideas on how I can figure out what is going on?

Thanks

That is really odd. I would have expected it to work if it works for the worker instance, as it is the same thing.

Can you send over the terminal output from when you check the env var?

It actually does not work for the workers either. I checked today and they produce warning logs, while I believe they should only contain errors.

Here are the env vars. I have obscured the sensitive information:

PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=***
TERM=xterm
DB_POSTGRESDB_HOST=***
N8N_PORT=***
NODE_FUNCTION_ALLOW_EXTERNAL=axios,form-data,uuid
QUEUE_RECOVERY_INTERVAL=60
QUEUE_BULL_REDIS_PASSWORD=***
EXECUTIONS_TIMEOUT=3600
EXECUTIONS_PROCESS=main
N8N_LOG_LEVEL=error
QUEUE_BULL_REDIS_DB=***
DB_TYPE=postgresdb
N8N_ENCRYPTION_KEY=***
DB_POSTGRESDB_USER=n8n
QUEUE_BULL_REDIS_HOST=***
EXECUTIONS_MODE=queue
QUEUE_BULL_REDIS_PORT=***
N8N_SKIP_WEBHOOK_DEREGISTRATION_SHUTDOWN=true
N8N_LOG_OUTPUT=console
DB_POSTGRESDB_PORT=***
WEBHOOK_TUNNEL_URL=https://***
DB_POSTGRESDB_DATABASE=***
NODE_FUNCTION_ALLOW_BUILTIN=fs
DB_POSTGRESDB_PASSWORD=***
KUBERNETES_PORT_443_TCP_PORT=443
N8N_PORT_5678_TCP_PROTO=tcp
N8N_PUBLIC_PROD_SERVICE_HOST=***
N8N_PUBLIC_PROD_PORT=tcp://***
N8N_PUBLIC_PROD_PORT_5678_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP=tcp://***
N8N_SERVICE_PORT=***
N8N_PORT_5678_TCP=tcp://***
N8N_PORT_5678_TCP_PORT=***
N8N_PORT_5678_TCP_ADDR=***
N8N_PUBLIC_PROD_SERVICE_PORT=***
N8N_PUBLIC_PROD_PORT_5678_TCP=tcp://***
KUBERNETES_SERVICE_PORT=***
KUBERNETES_SERVICE_PORT_HTTPS=***
N8N_SERVICE_HOST=***
N8N_PUBLIC_PROD_PORT_5678_TCP_PORT=***
N8N_PUBLIC_PROD_PORT_5678_TCP_ADDR=***
KUBERNETES_PORT_443_TCP_ADDR=***
KUBERNETES_SERVICE_HOST=***
KUBERNETES_PORT=tcp://***
KUBERNETES_PORT_443_TCP_PROTO=tcp
NODE_VERSION=14.15.5
YARN_VERSION=1.22.5
NODE_ICU_DATA=/usr/local/lib/node_modules/full-icu
HOME=/root

That is very strange. Out of interest, if you set it to debug and restart the containers, does that change the amount of output you see?

Interestingly enough, it does show debug information when I change the level! So the issue is that the error level still shows info logs.

This is how the logs look with debug level:

2022-01-26T19:10:25.598Z | debug    | Executing hook (hookFunctionsPush) {"executionId":"34433322","workflowId":101,"file":"WorkflowExecuteAdditionalData.js","function":"workflowExecuteBefore"}

This is the log line found when the error level is set (the same appears at info level):

Started with job ID: 21188485 (Execution ID: 34434140)

I would think that at the error level, those info logs would not appear at all. Is this expected? The reason I'm trying to remove those logs is that they do not add much information, and given the number of executions we do per hour, it adds up over time in our logging service bill.

Looking at the docs, it should only show errors, but I am not sure whether we have done something wrong with the logging or the documentation needs to be made clearer.

On the plus side, at least the environment variable itself is working; it is just the level option. I will ask internally and see if anyone has any thoughts.
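For reference, n8n's log levels follow the usual severity ordering, where the configured level shows that level plus everything more severe. A rough sketch of that filtering rule (illustrative only, not n8n's actual code):

```javascript
// Winston-style severity ordering: lower number = more severe (sketch, not n8n source)
const LEVELS = { error: 0, warn: 1, info: 2, verbose: 3, debug: 4 };

// A message is emitted only when its severity is within the configured level
function shouldLog(configuredLevel, messageLevel) {
  return LEVELS[messageLevel] <= LEVELS[configuredLevel];
}

console.log(shouldLog('error', 'info'));  // → false: info lines should be suppressed
console.log(shouldLog('debug', 'info'));  // → true: debug shows everything
```

By that rule, an info line showing up at level error points to something writing to stdout outside the logger.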

Hey @Jon, did you have the chance to investigate the issue internally? Thanks!

Hey @Miguel_Caballero_Pin,

Not yet, I was meant to have a look on Monday but got distracted with something else.

I will look into it this morning before doing anything else.

Hey @Miguel_Caballero_Pin,

Quick update: I have tracked down the issue. It looks like we have a stray console.log() left in our workflow runner process.

I will need to check with the core team to see why this is there and whether it is needed, but I suspect we can just wrap it in our normal logger and that will solve the issue.
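In other words, a bare console.log() writes straight to stdout and never consults N8N_LOG_LEVEL, whereas a logger call is gated first. A tiny sketch of the difference (illustrative, not the actual n8n code):

```javascript
// A bare console.log bypasses any level check; a logger call is gated first.
function makeLogger(configuredLevel) {
  const levels = { error: 0, warn: 1, info: 2, verbose: 3, debug: 4 };
  return {
    // Returns true when the message was emitted, for illustration
    info(message) {
      const emit = levels.info <= levels[configuredLevel];
      if (emit) console.log(message);
      return emit;
    },
  };
}

const logger = makeLogger('error');
logger.info('Started with job ID: 21188485');   // suppressed at level=error
console.log('Started with job ID: 21188485');   // a stray console.log prints regardless
```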

Awesome! Thanks @Jon. Please keep me posted.


Hey @Jon any news on this? Thanks!

Hey @Miguel_Caballero_Pin,

Not yet, the change is going through the normal prioritisation process.

I have seen this issue while I was trying to configure queue mode. I would like to know if this is fixed in version 1.15.2 of n8n.

In addition to this, I have another issue. When I use console.log("print something") in a Code node in my workflow, I see that it goes to the browser console, but nothing appears in the container logs (I run n8n in a Kubernetes cluster). I am trying to see the debug message in the container logs when I run this command:
kubectl logs -f n8n-worker-my-5f6786dffc-6x485 -n my-n8n-ns

The log looks like this, but console.log("print something") does not appear in it. I would appreciate any direction; the workflow is very simple, just a Code node with a console.log.

2023-11-19T16:38:38.583Z | info     | Start job: 15 (Workflow ID: FtyXINka7eIX0fSG | Execution: 561) "{ file: 'worker.js', function: 'runJob' }"
2023-11-19T16:38:38.594Z | verbose  | Workflow execution started "{\n  workflowId: 'FtyXINka7eIX0fSG',\n  file: 'LoggerProxy.js',\n  function: 'exports.verbose'\n}"
2023-11-19T16:38:38.596Z | debug    | Start processing node "Schedule Trigger" "{\n  node: 'Schedule Trigger',\n  workflowId: 'FtyXINka7eIX0fSG',\n  file: 'LoggerProxy.js',\n  function: 'exports.debug'\n}"
2023-11-19T16:38:38.597Z | debug    | Running node "Schedule Trigger" started "{\n  node: 'Schedule Trigger',\n  workflowId: 'FtyXINka7eIX0fSG',\n  file: 'LoggerProxy.js',\n  function: 'exports.debug'\n}"
2023-11-19T16:38:38.597Z | debug    | Running node "Schedule Trigger" finished successfully "{\n  node: 'Schedule Trigger',\n  workflowId: 'FtyXINka7eIX0fSG',\n  file: 'LoggerProxy.js',\n  function: 'exports.debug'\n}"
2023-11-19T16:38:38.598Z | debug    | Start processing node "Code" "{\n  node: 'Code',\n  workflowId: 'FtyXINka7eIX0fSG',\n  file: 'LoggerProxy.js',\n  function: 'exports.debug'\n}"
2023-11-19T16:38:38.599Z | debug    | Running node "Code" started "{\n  node: 'Code',\n  workflowId: 'FtyXINka7eIX0fSG',\n  file: 'LoggerProxy.js',\n  function: 'exports.debug'\n}"
2023-11-19T16:38:38.606Z | debug    | Running node "Code" finished successfully "{\n  node: 'Code',\n  workflowId: 'FtyXINka7eIX0fSG',\n  file: 'LoggerProxy.js',\n  function: 'exports.debug'\n}"
2023-11-19T16:38:38.608Z | verbose  | Workflow execution finished successfully "{\n  workflowId: 'FtyXINka7eIX0fSG',\n  file: 'LoggerProxy.js',\n  function: 'exports.verbose'\n}"
2023-11-19T16:38:38.608Z | debug    | Executing hook (hookFunctionsSaveWorker) "{\n  executionId: '561',\n  workflowId: 'FtyXINka7eIX0fSG',\n  file: 'WorkflowExecuteAdditionalData.js',\n  function: 'workflowExecuteAfter'\n}"
2023-11-19T16:38:38.609Z | debug    | Save execution data to database for execution ID 561 "{\n  executionId: '561',\n  workflowId: 'FtyXINka7eIX0fSG',\n  finished: true,\n  stoppedAt: 2023-11-19T16:38:38.608Z,\n  file: 'sharedHookFunctions.js',\n  function: 'updateExistingExecution'\n}"
2023-11-19T16:43:08.079Z | info     | Stopping n8n... "{ file: 'worker.js', function: 'stopProcess' }"

@sbollap1 We recently realized that we were sending Code node log output to stdout, despite that not being the behavior we wanted, as that could be a security issue. So, since version 1.5.0, we have this behavior disabled by default.

For manual executions, the console.log calls in your code should still log to the browser console. But if you want to be able to log for production executions, you can re-enable this behavior by setting the env variable CODE_ENABLE_STDOUT to true.

This is documented in the breaking changes doc.
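As a rough sketch of that gate (assumed behavior based on the description above, not n8n's actual source), console output from Code nodes only reaches the process stdout when the flag is set:

```javascript
// Whether Code-node console output should be forwarded to the container's stdout
// (sketch of the documented CODE_ENABLE_STDOUT flag, not n8n's implementation)
function codeStdoutEnabled(env = process.env) {
  return env.CODE_ENABLE_STDOUT === 'true';
}

console.log(codeStdoutEnabled({ CODE_ENABLE_STDOUT: 'true' }));  // → true
console.log(codeStdoutEnabled({}));                              // → false
```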


@netroy

N8N_LOG_OUTPUT=file,console
CODE_ENABLE_STDOUT=true

doesn’t seem to work.

Example wf:

n8n v1.136.2


The console.log() output is only getting printed in the browser. The container logs don't show the corresponding output.

PS: The CODE_ENABLE_STDOUT doesn’t seem to be included in the n8n docs either (Logs environment variables | n8n Docs)


Update: I just realised the output only gets printed to the file for non-manually-triggered workflows.
Is it not possible to have the output printed to the file irrespective of the trigger type?