Managing storage for your cloud self-hosted n8n instance

Hello community. Hope this helps those of you who are not very technical and are self-hosting an n8n instance.

We created this maintenance guide to support community members who are self-hosting their automation stacks on budget-friendly infrastructure. Running n8n and Traefik on small cloud instances, such as those with 30GB of storage, requires a proactive approach to resource management. This guide will help you check your disk usage, purge old Docker images that may be clogging your system, and keep the n8n database lean. If your instance hangs or crashes, the cause is often Docker image bloat, oversized log files, or an uncompacted SQLite database. Following these steps will help you maintain a lean, high-performance environment on both AWS and Google Cloud.

1. Quick Diagnosis

Check your disk usage immediately to see if you are in the “Danger Zone.” This command works on both AWS and GCP.

Bash

df -h /

  • 0-70%: Healthy.

  • 70-90%: Warning. Perform cleanup soon.

  • 90%+: Critical. The filesystem may be remounted read-only, or processes may hang.
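If you want to script this check (for example, to alert yourself from cron), here is a minimal sketch. The classify_usage helper name is my own invention; its thresholds mirror the bands above:

```shell
#!/bin/sh
# Map a disk-usage percentage to the health bands described above.
# classify_usage is a hypothetical helper name, not an n8n feature.
classify_usage() {
  pct=$1
  if [ "$pct" -lt 70 ]; then
    echo "Healthy"
  elif [ "$pct" -lt 90 ]; then
    echo "Warning: perform cleanup soon"
  else
    echo "Critical: system may become read-only"
  fi
}

# df -P prevents line wrapping, so the Use% column is always field 5.
usage=$(df -P / | awk 'NR==2 { gsub("%", "", $5); print $5 }')
echo "Root disk at ${usage}%: $(classify_usage "$usage")"
```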


2. Emergency Cleanup (Reclaim GBs Instantly)

Step A: Preview Inactive Images

List every tagged image and its size (old n8n versions often linger here); compare against sudo docker ps to see which ones are not actually running:

Bash

sudo docker images -f "dangling=false" --format "table {{.Repository}}:{{.Tag}}\t{{.ID}}\t{{.Size}}"
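To spot the biggest space hogs at a glance, you can sort that output by the size column; sort -h understands human-readable sizes like 150MB and 1.2GB. The image names and sizes below are made up for illustration:

```shell
#!/bin/sh
# Made-up sample in the same format the docker images command above prints.
images='n8nio/n8n:1.50.0 abc123 1.2GB
n8nio/n8n:1.64.0 def456 1.3GB
traefik:v2.11 789abc 150MB'

# Sort by field 3 (the size), largest first. On a live host, pipe the real
# command into sort instead of this sample variable:
#   sudo docker images --format "{{.Repository}}:{{.Tag}}\t{{.ID}}\t{{.Size}}" | sort -k3 -h -r
echo "$images" | sort -k3 -h -r
```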

Step B: The “Big Clean” (Remove Old Versions)

Every time you update n8n, Docker keeps the old image around. This command removes all images not used by at least one container; it is safe for currently running containers. (Note: docker image prune only removes images. If you also want to clear stopped containers, unused networks, and build caches, use sudo docker system prune -a -f instead.)

Bash

sudo docker image prune -a -f

Step C: Truncate Docker Logs

Docker container logs can grow to several gigabytes. Use this command to empty them without restarting your services:

Bash

sudo sh -c 'truncate -s 0 /var/lib/docker/containers/*/*-json.log'
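Truncating is a one-off fix. To stop logs from regrowing, you can cap them with Docker's built-in log rotation (max-size and max-file are real options of the json-file logging driver). The sketch below writes the config to a local demo path so nothing on your host changes; on a real host, write it to /etc/docker/daemon.json, restart Docker, and recreate your containers so the limits apply:

```shell
#!/bin/sh
# Cap each container log at 3 rotated files of 10MB each.
# Demo path only; the real location is /etc/docker/daemon.json.
cat > ./daemon.json <<'EOF'
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
EOF
cat ./daemon.json
```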


3. n8n Database Maintenance (The “SQLite Secret”)

Deleting executions in the n8n UI does not shrink the database file on disk. You must “Vacuum” the file to reclaim space.

A. The “Set & Forget” Config (Recommended)

Add these environment variables to your docker-compose.yaml to let n8n clean itself automatically:

YAML

environment:
  - EXECUTIONS_DATA_PRUNE=true
  - EXECUTIONS_DATA_MAX_AGE=720 # Keeps 30 days of history (720 hours)
  - EXECUTIONS_DATA_PRUNE_MAX_COUNT=50000

B. Manual Emergency Cleanup & Vacuum

If your disk is 100% full, n8n cannot prune itself. You must do it manually:

  1. Locate the Database: sudo find / -name "database.sqlite" -size +1M

  2. Stop n8n: sudo docker stop <your_container_name>

  3. Run Delete & Vacuum:

Bash

# 1. Delete executions older than 30 days

sqlite3 database.sqlite "DELETE FROM execution_entity WHERE stoppedAt < date('now', '-30 days');"

# 2. Delete heavy binary/JSON data linked to those executions

sqlite3 database.sqlite "DELETE FROM execution_data WHERE executionId NOT IN (SELECT id FROM execution_entity);"

# 3. RECLAIM SPACE: This shrinks the actual file size on disk

sqlite3 database.sqlite "VACUUM;"

  4. Start n8n: sudo docker start <your_container_name>
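If you want to see for yourself why step 3 needs the VACUUM, this throwaway experiment (my own demo, run against a scratch database, never your real one) shows that DELETE leaves the file size untouched while VACUUM shrinks it:

```shell
#!/bin/sh
# Demo on a scratch database: DELETE alone does not shrink a SQLite file.
db=./demo.sqlite
rm -f "$db"
sqlite3 "$db" "CREATE TABLE execution_entity (id INTEGER PRIMARY KEY, payload BLOB);"
# Insert 50 rows of ~100KB each (~5MB total).
sqlite3 "$db" "WITH RECURSIVE c(x) AS (SELECT 1 UNION ALL SELECT x+1 FROM c WHERE x<50) INSERT INTO execution_entity (payload) SELECT randomblob(100000) FROM c;"
before=$(wc -c < "$db")
sqlite3 "$db" "DELETE FROM execution_entity;"
after_delete=$(wc -c < "$db")   # same size: freed pages stay in the file
sqlite3 "$db" "VACUUM;"
after_vacuum=$(wc -c < "$db")   # now the file actually shrinks
echo "before=$before after_delete=$after_delete after_vacuum=$after_vacuum"
```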

4. Scaling Hardware: Increasing Storage

If cleanup isn’t enough, you must increase the disk size in your Cloud Console and then “tell” the OS to use the new space.

📍 Amazon Web Services (AWS EC2)

  1. AWS Console: Go to EC2 > Elastic Block Store > Volumes. Select your volume > Modify Volume > Increase Size (e.g., 30GB to 60GB).

  2. Terminal (Identify Disk): Run lsblk. AWS usually uses nvme0n1.

  3. Expand Partition & Filesystem:

Bash

sudo growpart /dev/nvme0n1 1 # Expand partition 1

sudo resize2fs /dev/nvme0n1p1 # Expand Ext4 filesystem

📍 Google Cloud Platform (GCP)

  1. GCP Console: Go to Compute Engine > Storage > Disks. Click your instance disk > Edit > Change Size > Save.

  2. Terminal (Identify Disk): Run lsblk. GCP usually uses sda.

  3. Expand Partition & Filesystem:

Bash

sudo growpart /dev/sda 1 # Expand partition 1

# For Ext4 (Common):

sudo resize2fs /dev/sda1

# OR if using XFS (Common on some GCP images):

# sudo xfs_growfs /
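Not sure whether your root filesystem is Ext4 or XFS? findmnt (part of util-linux, present on standard AWS and GCP images) will tell you which resize command to use:

```shell
#!/bin/sh
# Print the root filesystem type and the matching grow command.
fstype=$(findmnt -n -o FSTYPE /)
echo "Root filesystem: $fstype"
case "$fstype" in
  ext4) echo "Use: sudo resize2fs <device>" ;;
  xfs)  echo "Use: sudo xfs_growfs /" ;;
  *)    echo "Check your distro docs for resizing $fstype" ;;
esac
```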


5. Automated Weekly Maintenance

Prevent future issues by scheduling an automatic Docker cleanup every Sunday at midnight. This works on both AWS and GCP.

  1. Open crontab: sudo crontab -e

  2. Paste this at the bottom:

Bash

0 0 * * 0 /usr/bin/docker image prune -f
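If you want the weekly job to cover logs as well as images, point cron at a small script instead of a single command. A sketch (the filename docker-cleanup.sh and its install path are my suggestion; it is written to a local demo path here for illustration, but on a real host install it as /usr/local/bin/docker-cleanup.sh):

```shell
#!/bin/sh
# Weekly cleanup sketch: prune unused images, then truncate container logs.
# Written to a local demo path for illustration.
cat > ./docker-cleanup.sh <<'EOF'
#!/bin/sh
docker image prune -f
truncate -s 0 /var/lib/docker/containers/*/*-json.log
df -h /
EOF
chmod +x ./docker-cleanup.sh
cat ./docker-cleanup.sh
```

The matching crontab entry would then be 0 0 * * 0 /usr/local/bin/docker-cleanup.sh.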


6. Troubleshooting: Bringing Instance Back Online

If you stopped n8n for maintenance and need to bring it back up:

  1. Find your container name: sudo docker ps -a (Look for the container that says “Exited”)

  2. Start the container: sudo docker start <container_name>

  3. Check logs if it fails to start: sudo docker logs --tail 50 <container_name>
