Deleting executions is resource intensive

Deleting executions via the UI on my n8n instance seems to cause high CPU usage. After some time of a maxed-out CPU, I decided to redeploy the instance. Once redeployed, the CPU is back to normal but the executions I selected do not seem to be deleted. Does anyone have any ideas on why this is occurring? Should I just start a new instance from scratch and import my existing workflows?

n8n setup

  • n8n version: 0.217.2
  • Database: SQLite
  • Running n8n with the execution process: main
  • Running n8n via: Docker

Additional info

  • using a Raspberry Pi 4B on Ubuntu Server 22.04 LTS
  • 2GB RAM with 4GB swap

Hi @d4vidsha

How are you deleting the executions? I assume with the CLI?
Have you tried just connecting to the database and clearing the table that way?

I have been deleting executions from the UI. Looking at the docs, I'm unsure how to delete executions from the CLI or directly within the SQLite database. Could you provide some examples of how this can be done?

Hopefully this advice still applies today.


Thanks to @BramKn and @jonflow, I’ve deleted all executions with these steps now:

  1. Ensure the instance is offline.
  2. Keep a backup of existing .n8n directory by using
    mv .n8n/ .n8n.bak
  3. Since I use sqlite as my database of choice, I access my database with
    sqlite3 database.sqlite
  4. Since I want to delete all executions to date, I run
    DELETE FROM execution_entity WHERE startedAt <= date('now');
  5. Quit the sqlite client with
    .quit
  6. (Optional) If using N8N_DEFAULT_BINARY_DATA_MODE=filesystem, you can delete the binaryData directory too, as all related executions will already have been deleted by the step 4 command. In my experience, the binaryData directory was quite large (7.5GB), since it contains the files of past executions, so deleting it reclaimed 99% of the space.
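For anyone wanting to try the DELETE from steps 3–5 safely first, here is a sketch that runs the same statement against a throwaway copy rather than the real `~/.n8n/database.sqlite`. The two-column `execution_entity` schema here is a stand-in (n8n's real table has more columns); note that `VACUUM` is needed afterwards, because a plain `DELETE` does not shrink the SQLite file on disk:

```shell
#!/bin/sh
set -e

# Scratch database so the real ~/.n8n/database.sqlite stays untouched.
DB=$(mktemp -d)/database.sqlite

# Stand-in for n8n's execution table (illustrative schema only).
sqlite3 "$DB" "CREATE TABLE execution_entity (id INTEGER PRIMARY KEY, startedAt TEXT);
INSERT INTO execution_entity (startedAt) VALUES ('2023-01-01'), ('2023-02-01');"

# Step 4: delete all executions started up to today,
# then compact the file, since DELETE alone leaves the space allocated.
sqlite3 "$DB" "DELETE FROM execution_entity WHERE startedAt <= date('now');
VACUUM;"

# Confirm the table is now empty.
sqlite3 "$DB" "SELECT COUNT(*) FROM execution_entity;"   # prints 0
```

Running `VACUUM` on a large executions table can itself be slow and disk-hungry on a Pi (it rewrites the whole file), so do it while the instance is offline, as in step 1.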

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.