What's the purpose of the n8nEventLog.log file?

Describe the issue/error/question

n8n generates files called “n8nEventLog.log”. What are these files for? Doesn’t the execution entity table in n8n’s database serve the exact same purpose? Is it safe to mount this file to /dev/null?

Information on your n8n setup

  • n8n version: 0.220.0
  • Database you’re using (default: SQLite): MariaDB
  • Running n8n with the execution process [own(default), main]: main
  • Running n8n via [Docker, npm, n8n.cloud, desktop app]: Docker

Hi @MeesJ, these files contain more detailed information on which workflows and nodes were executed, as well as which other actions a user might have performed. This is part of the log streaming functionality. It seems that while the actual streaming part is only available to enterprise users, the files themselves are also written for users of the free community edition (at least that’s the n8n edition I am using, and I do have these files).

If you are concerned about the additional disk space these files require, I’d suggest updating the environment variables so that n8n creates only one very small file. That said, mounting them to /dev/null is probably fine as well; I just haven’t tested it.

Hi, thank you for your very informative response!
For anyone reading this in the future: I’ve tried mounting the file to /dev/null and can confirm that n8n still works without any issues or errors.
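For anyone wanting to replicate this with Docker, here is a minimal, untested sketch. The image name, volume name, and the log path /home/node/.n8n/n8nEventLog.log are assumptions based on a default n8n Docker setup, not details from this thread; adjust them to match your own instance:

```
# Bind-mount the event log file to /dev/null so it never grows on disk.
# Note that n8n may still rotate additional files (n8nEventLog-1.log etc.),
# which this single mount does not cover.
docker run -d --name n8n \
  -p 5678:5678 \
  -v n8n_data:/home/node/.n8n \
  -v /dev/null:/home/node/.n8n/n8nEventLog.log \
  n8nio/n8n
```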

Hi @MeesJ, as @MutedJam said, these files are part of the log streaming feature, but not just that: they actually offer functionality for ‘regular’ users as well.
The Event Log stores a number of workflow events, in particular which workflows started, stopped, or failed, as well as which nodes started to run and finished (or not). n8n uses this information as a backup store in the case of a crash, for example when a workflow causes the instance to run out of memory. When this happens, the instance will, at the next restart, read and analyse the Event Log file(s) and check for workflows that have not finished. If it finds any, it re-creates the execution data as far as it is available and updates the database, so that in the frontend you can see not only which workflows crashed but also which node ran last.
You can test this for yourself if you manage to crash your instance during a manual run of a workflow in the frontend. When the instance comes back up, the frontend will show the execution as failed and an error on the node that caused the out-of-memory error. This is thanks to the log files.
If you send them to /dev/null, this will no longer work, so I would suggest keeping the files but reducing their size. You can safely set the two environment variables N8N_EVENTBUS_LOGWRITER_KEEPLOGCOUNT to 2 and N8N_EVENTBUS_LOGWRITER_MAXFILESIZEINKB to 1024 (1 MB), or even to 100, which caps the logs at 200 KB in total. Mind you, if you are running many executions at the same time (or they have a large number of nodes), 100 KB may not be enough.
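To make that concrete, here is a sketch of how those variables could be set for a Docker-based instance like the one in the original post. The two environment variable names come from the thread; the image and volume names are placeholder assumptions:

```
# Keep at most 2 rotated event log files of 1 MB each (~2 MB total).
# Set N8N_EVENTBUS_LOGWRITER_MAXFILESIZEINKB=100 instead for a ~200 KB cap.
docker run -d --name n8n \
  -p 5678:5678 \
  -e N8N_EVENTBUS_LOGWRITER_KEEPLOGCOUNT=2 \
  -e N8N_EVENTBUS_LOGWRITER_MAXFILESIZEINKB=1024 \
  -v n8n_data:/home/node/.n8n \
  n8nio/n8n
```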
