N8N cloud ran out of space

No space left on device

For the past four days almost all our workflows have been failing: there is no space left on the device. We use n8n Cloud on a Pro plan.

The error message: Problem in node ‘Azure Storage’

ENOSPC: no space left on device, mkdir ‘/home/node/.n8n/binaryData/workflows/USvNGc5yuMG5Out8/executions/14045’

Already done:

  • Upgraded to the latest n8n version
  • Deleted executions to free up space - in a project, under Executions, I used some filters to bulk-delete runs from a couple of days back
  • Deleted archived workflows using the n8n API
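For reference, the API-based cleanup above can be scripted. This is a minimal sketch, assuming an API key created under Settings → n8n API and the public API v1 endpoints (`GET /executions` with a `status` filter, `DELETE /executions/{id}`); the instance URL and key below are placeholders:

```python
# Hedged sketch: bulk-delete finished executions via the n8n public REST API.
# N8N_BASE and API_KEY are placeholders - substitute your own values.
import json
import urllib.request

N8N_BASE = "https://your-instance.app.n8n.cloud/api/v1"  # assumption: your cloud URL
API_KEY = "your-api-key"  # created under Settings -> n8n API

def api(method: str, path: str):
    """Perform one authenticated request and return the decoded JSON body."""
    req = urllib.request.Request(N8N_BASE + path, method=method)
    req.add_header("X-N8N-API-KEY", API_KEY)
    req.add_header("Accept", "application/json")
    with urllib.request.urlopen(req) as resp:
        body = resp.read()
        return json.loads(body) if body else None

def list_execution_ids(page: dict) -> list:
    """Pull the execution ids out of one page of GET /executions results."""
    return [str(e["id"]) for e in page.get("data", [])]

# Example usage (not run here, requires a live instance):
#   page = api("GET", "/executions?status=success&limit=250")
#   for exec_id in list_execution_ids(page):
#       api("DELETE", f"/executions/{exec_id}")  # frees the stored run data
```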

Need help with:

  • Where can I see a breakdown of the space used on the device for n8n Cloud? What is causing this issue in the first place?
  • How can I clean up the space used on the device (efficiently)?

Information on your n8n setup

  • n8n version: 1.111.0
  • Database : default: SQLite
  • n8n EXECUTIONS_PROCESS setting : default: own, main
  • Running n8n via: n8n cloud
  • Operating system:

Ran out of space in n8n cloud - how to clear out temp files? - #2 by n8n describes the exact same problem.

I set up a workflow to remove more execution data, and I cleaned up all archived workflows, but the problem remains. Please help by detailing what is taking up all the space in n8n Cloud!

There is supposed to be 100 GB of data storage. I can’t imagine that we use up all that space.

Hi Jaco,

I am Charles from n8n support. Can you please log a ticket by sending an e-mail to [email protected]? Can you also state how many days’ worth of executions you have deleted?

There is currently no way to check your storage space from the user side.

In relation to clearing up space, the only real option is to stop saving unnecessary executions:

  • Go to the Admin Panel → Manage.

  • In Executions to Save, deselect types you don’t need (e.g., only save errors, not successful runs).

  • For individual workflows:

    • Open the workflow’s Settings (three-dot menu).

    • Set Save successful production executions to Do not save.

Please log a ticket and I or a member of my team will be in contact shortly. Please let me know when you have logged the ticket, and its number. I agree that on the face of it it seems unlikely that you are using that much data; we will look into it once we have the ticket.

2 Likes

Hey there,
For any future users who run into this exact issue, a recap of what most users know:

  • Executions take up space. The data in each node is saved. You can configure which execution types are saved, and manually override that per workflow.
  • Deleting executions frees up space. We set up an n8n housekeeping flow that requests executions and deletes those older than X days. This way we can both monitor the work done by successful executions and avoid retaining too much data. Running the housekeeping flow temporarily resolved the issue.
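The retention rule behind such a housekeeping flow is simple to sketch. This is a hedged illustration, not the actual flow; the `startedAt` and `id` field names mirror what the n8n API returns for executions, but treat them as assumptions:

```python
# Hedged sketch of age-based retention: keep the last N days of executions,
# flag everything older for deletion. Field names (startedAt, id) are assumed.
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 14  # assumption: keep two weeks of history

def ids_older_than(executions, days, now=None):
    """Return ids of executions whose startedAt is older than `days` days."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=days)
    stale = []
    for e in executions:
        # n8n timestamps end in "Z"; fromisoformat needs an explicit offset.
        started = datetime.fromisoformat(e["startedAt"].replace("Z", "+00:00"))
        if started < cutoff:
            stale.append(e["id"])
    return stale
```

Each id returned by `ids_older_than` would then be deleted through the API, exactly as the housekeeping flow does.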

The real problem in our case was the following node: Azure Storage Get Blob! We noticed while configuring the node that it was slow, almost unresponsive. The reason: we were looking for a file in a giant data lake container with thousands of files. We were not retrieving those thousands of files, just one, a JSON with <100 tokens. I figure n8n retrieves the filenames (and perhaps even the files?) and that took up some serious space on the disk.

The Solution: We created a separate container in Azure Storage, containing far fewer files. Problem solved.
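An alternative to the separate-container approach, if the blob name is already known, is to fetch that single blob directly rather than listing the container at all. Azure Blob Storage serves individual blobs over plain HTTPS (optionally with a SAS token), so a minimal sketch needs only the standard library; the account, container, and blob names below are placeholders:

```python
# Hedged sketch: fetch one known Azure blob directly by URL instead of
# listing a container with thousands of entries. All names are placeholders.
import urllib.parse
import urllib.request

def build_blob_url(account: str, container: str, blob_name: str, sas: str = "") -> str:
    """Compose the direct download URL for a single blob."""
    path = f"/{container}/{urllib.parse.quote(blob_name)}"
    url = f"https://{account}.blob.core.windows.net{path}"
    return f"{url}?{sas}" if sas else url

# Example usage (not run here, requires a real account and SAS token):
#   url = build_blob_url("myaccount", "small-container", "config.json", sas="sv=...")
#   with urllib.request.urlopen(url) as resp:
#       data = resp.read()
```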

1 Like

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.