Store execution data in a different database

Describe the issue/error/question

n8n is an amazing tool, and I use it for almost all my automation needs. I have tens of workflows running 24/7, and all of them generate their own execution data. By setting up automated purges of this data, I can keep the database size low. The problem is that n8n causes a lot of writes to my disk, because of the many executions that are triggered and the output logs that need to be saved. Because of that, I would like to store the execution data that n8n keeps in the execution_entity table elsewhere on my system, so I can mount it on a different disk or (preferably) on tmpfs. This brings me to my question: is it possible to save the execution data n8n generates in a different database file, and if so, how?
Disabling the saving of execution data isn’t an option for me, as that would prevent me from debugging things. I also tried using a MySQL database and changing the storage engine for the execution_entity table to MEMORY, but that didn’t work either, due to the limited set of column types that engine supports.

Thanks a lot for this great tool and I hope you’ll be able to help me!

What is the error message (if any)?

Please share the workflow

(Select the nodes and use the keyboard shortcuts CMD+C/CTRL+C and CMD+V/CTRL+V to copy and paste the workflow respectively)

Share the output returned by the last node

Information on your n8n setup

  • n8n version: 0.218.0
  • Database you’re using (default: SQLite): SQLite
  • Running n8n with the execution process [own(default), main]: main
  • Running n8n via [Docker, npm, desktop app]: Docker

As far as I know, it’s impossible to save to another database or location, but please correct me if I’m wrong.

If you don’t need the execution history forever, n8n has a built-in cleanup functionality.

You need to set two environment variables: EXECUTIONS_DATA_PRUNE and EXECUTIONS_DATA_MAX_AGE.

You can find more information here.
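For a Docker setup like the one described in this thread, those variables could be passed along the following lines. This is only an illustrative docker-compose fragment: the service definition and the values are examples (EXECUTIONS_DATA_MAX_AGE is a number of hours).

```yaml
# Illustrative fragment; adjust the values to your own retention needs.
services:
  n8n:
    image: n8nio/n8n
    environment:
      - EXECUTIONS_DATA_PRUNE=true    # enable automatic pruning of old executions
      - EXECUTIONS_DATA_MAX_AGE=168   # prune executions older than 168 hours (example value)
```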

Thanks for your response!

That’s very unfortunate. Aren’t there workarounds for this by any chance?

I’m already using these. They do indeed keep the size of the database low, but they don’t change the fact that execution data is still constantly being written to my disk (to put this in perspective: on my instance a workflow triggers roughly every few seconds, and so writes its execution data to disk). I’d love to save this execution data elsewhere, on a different mount, so I can keep writes to the disk n8n runs on low.

My flows also run more than 1k times a day, but I’ve never had a performance problem. I’m using Postgres on an SSD.

I am no database expert or wizard but… could you not create a view that contains all of your executions for the current day, then use a trigger or similar to copy them to another database for future reference?

The tricky bit, of course, would then be viewing the data, but with some SQL wizardry or something like Metabase you could probably build a dashboard to show the data you need.
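Since the thread uses SQLite, the copy-to-another-database idea above could be sketched roughly like this. Note this is only a sketch: the table and column names are simplified placeholders, not n8n’s real execution_entity schema, and the second database is attached in-memory here where in practice it would be a file on a different mount.

```python
import sqlite3

# Stand-in for the main n8n database (real n8n uses a file, e.g. database.sqlite).
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE execution_entity (id INTEGER PRIMARY KEY, finished_at TEXT, data TEXT)"
)
con.executemany(
    "INSERT INTO execution_entity (finished_at, data) VALUES (?, ?)",
    [("2023-03-01", "old run"), ("2023-03-02", "today's run")],
)

# Attach a second database; in practice this would be a file path on another
# disk or a tmpfs mount, e.g. ATTACH DATABASE '/mnt/fast/archive.db' AS archive.
con.execute("ATTACH DATABASE ':memory:' AS archive")

# Create an empty copy of the table in the archive, then periodically copy
# the current day's executions across for future reference.
con.execute(
    "CREATE TABLE archive.execution_entity AS SELECT * FROM execution_entity WHERE 0"
)
con.execute(
    "INSERT INTO archive.execution_entity "
    "SELECT * FROM execution_entity WHERE finished_at >= '2023-03-02'"
)
con.commit()

print(con.execute("SELECT COUNT(*) FROM archive.execution_entity").fetchone()[0])  # 1
```

This only mirrors the data, though; it does not stop n8n itself from writing executions to the main database first, which is the core of the problem above.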

Hi Jon, thanks for your response!
That isn’t exactly what I’m trying to accomplish (if I understood you correctly). I’m fine with purging older executions. My problem is that I want to decrease the amount of writes to my disk by having execution data and logs saved to a different mount, if that makes any sense.

Hey @MeesJ,

Ah ok, I thought you just wanted to move the data. In that case, no, we don’t have an option to use a different storage location for a single table. I would have thought MySQL or Postgres might offer something for this, though.