I set up a workflow to remove excess execution data, and I cleaned up all archived workflows, but the problem remains. Please help me figure out what is taking up all the space in n8n Cloud!
There is supposed to be 100 GB of data storage. I can't imagine that we use up all that space.
I am Charles from n8n support. Can you please log a ticket by sending an e-mail to [email protected]? Can you also state how many days' worth of executions you have deleted?
There is currently no way to check your storage space from the user side.
In relation to clearing up space, the only real option is to stop saving unnecessary executions.

Globally:
- Go to the Admin Panel → Manage.
- In Executions to Save, deselect the types you don't need (e.g., only save errors, not successful runs).

For individual workflows:
- Open the workflow's Settings (three-dot menu).
- Set Save successful production executions to Do not save.
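For anyone managing workflows as JSON (e.g., via export/import or the API), the per-workflow option above corresponds to settings keys like the following. This is a sketch from memory; the exact key names may vary by n8n version, so check an exported workflow from your own instance:

```json
{
  "name": "My workflow",
  "settings": {
    "saveDataSuccessExecution": "none",
    "saveDataErrorExecution": "all",
    "saveManualExecutions": false
  },
  "nodes": [],
  "connections": {}
}
```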
Please log a ticket, and I or a member of my team will be in contact shortly. Please let me know once you have logged the ticket, along with its number. I agree that, on the face of it, it seems unlikely that you are using that much data, and we will look into it once we have the ticket.
Hey there,
For any future users who run into this exact issue, here is a recap of what most users already know:
Executions take up space. The data in each node is saved. You can configure which executions are saved per type, and manually override that per workflow.
Deleting executions frees up space. We set up an n8n housekeeping flow that requests executions and deletes those older than X days. This way we can both monitor the work done by successful executions and avoid retaining too much data. Running the housekeeping flow temporarily made the issue go away.
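Our housekeeping flow is built from n8n nodes, but the pruning logic is roughly the following sketch against the n8n public API. The instance URL, API key, and 30-day cutoff are placeholders, not our actual values:

```python
import json
import urllib.request
from datetime import datetime, timedelta, timezone

N8N_URL = "https://example.app.n8n.cloud/api/v1"  # placeholder instance URL
API_KEY = "YOUR_API_KEY"                          # placeholder credential
MAX_AGE_DAYS = 30                                 # placeholder retention window

def stale_ids(executions, now, max_age_days):
    """Return IDs of executions whose startedAt is older than the cutoff."""
    cutoff = now - timedelta(days=max_age_days)
    return [
        e["id"]
        for e in executions
        if datetime.fromisoformat(e["startedAt"].replace("Z", "+00:00")) < cutoff
    ]

def prune():
    # Fetch successful executions, then delete the stale ones one by one.
    req = urllib.request.Request(
        f"{N8N_URL}/executions?status=success",
        headers={"X-N8N-API-KEY": API_KEY},
    )
    with urllib.request.urlopen(req) as resp:
        executions = json.load(resp)["data"]
    for eid in stale_ids(executions, datetime.now(timezone.utc), MAX_AGE_DAYS):
        delete = urllib.request.Request(
            f"{N8N_URL}/executions/{eid}",
            method="DELETE",
            headers={"X-N8N-API-KEY": API_KEY},
        )
        urllib.request.urlopen(delete)
```

Running this (or the equivalent n8n nodes) on a schedule keeps the execution table from growing unbounded while still leaving a recent window for monitoring.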
The real problem in our case was the following node: Azure Store Get Blob! We noticed during configuration of the node that it was slow, almost unresponsive. The reason: we were looking for a file in a giant data lake container with thousands of files. We were not retrieving all of those files, just one, a JSON file of under 100 tokens. I figure n8n retrieves the filenames (and perhaps even the files?), and that took up some serious space on disk.
The Solution: We created a separate container in Azure Storage, containing far fewer files. Problem solved.
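For reference, the distinction that bit us, sketched with plain Azure Blob Storage REST URLs (the account, container, and blob names below are made up for illustration): fetching a single blob addresses it directly by name, while a container listing enumerates every blob in it, so its cost grows with container size.

```python
# Placeholder storage account name for illustration only.
ACCOUNT = "mystorageaccount"

def blob_url(container, blob_name):
    """Direct GET of one blob: cost is independent of how many blobs the container holds."""
    return f"https://{ACCOUNT}.blob.core.windows.net/{container}/{blob_name}"

def list_url(container):
    """List Blobs operation: enumerates every blob in the container (paged)."""
    return f"https://{ACCOUNT}.blob.core.windows.net/{container}?restype=container&comp=list"

# A small, dedicated container keeps any listing the node performs cheap.
print(blob_url("small-container", "config.json"))
```

If a node (or any client) needs to browse or resolve names, it ends up on the listing path, which is why pointing it at a small, dedicated container fixed the slowness for us.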