Problem Updating n8n on Docker: Losing Data & Encryption Key

Hello everyone,

I’m seeking advice on the safest way to update my n8n instance and understand a persistent data issue I’m facing.

I’m currently running an older version of n8n on an AWS EC2 instance. Every attempt to update to a newer version fails: the container starts up and sends me to the initial setup screen, as if all my data has been lost. I’ve been able to recover by restoring an AWS snapshot, but I need a sustainable update path.

Key Diagnostics & Findings:

After extensive troubleshooting, we’ve discovered a very unusual situation:

  1. Bind Mount is Being Ignored: My original docker run command specifies a bind mount (-v ~/.n8n:/root/.n8n). However, the n8n container is completely ignoring this path. The data in /home/ec2-user/.n8n is an old, empty database.

  2. Actual Data Location: The live, in-use database is actually being written to the container’s internal overlay filesystem. On the host, the path is: /var/lib/docker/overlay2/[long-hash-string]/diff/home/node/.n8n/database.sqlite.

  3. Data is Verified: I have confirmed that the database at this overlay2 path is large (~542 MB) and contains all of my workflows (88 total, verified with sqlite3). The associated config file in this directory also contains the correct encryptionKey.

  4. The Update Still Fails: Despite having a verified backup of this correct data, every attempt to restore it for the new version fails. My process has been:

    • Create a new Docker Managed Volume (docker volume create n8n_data).
    • Restore the verified backup files (database.sqlite, config, etc.) into this volume.
    • Correct the permissions inside the volume so all files are owned by the node user (chown -R 1000:1000).
    • Launch the new n8n container, pointing to this correctly populated and permissioned volume (-v n8n_data:/root/.n8n).

Even with all this, the new container fails to start correctly and reports that the encryption key was not found.
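
For reference, the restore procedure above can be sketched as a few shell commands. This is only an illustration: the backup directory name `./n8n-backup` is a placeholder, and UID/GID 1000 corresponds to the `node` user inside the official n8n image.

```shell
# Sketch of the restore steps above; "./n8n-backup" is a placeholder
# for wherever the verified backup files live on the host.
docker volume create n8n_data

# Use a throwaway container to copy the backup into the volume and
# set ownership to the "node" user (UID/GID 1000 in the n8n image).
docker run --rm \
  -v n8n_data:/data \
  -v "$PWD/n8n-backup":/backup:ro \
  alpine sh -c 'cp -a /backup/. /data/ && chown -R 1000:1000 /data'
```

Copying through a helper container like this avoids touching `/var/lib/docker/volumes` directly on the host.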

My Questions for the Community:

  1. Why would a Docker container completely ignore a specified bind mount and default to its internal storage? Is this a known behavior in certain n8n versions or on specific environments like Amazon Linux?
  2. Is there a known issue with security modules (like SELinux on Amazon Linux) that would prevent a container from reading a file from a Docker Managed Volume, even when the user/group permissions (UID/GID) are correctly set?
  3. Given this persistent and strange behavior, what is the most robust strategy to migrate this data and successfully update? Would switching to Docker Compose fundamentally solve this, and if so, what is the recommended way to handle this initial data migration?

Any insights or suggestions would be greatly appreciated. Thanks!

What is the error message (if any)?

When attempting to start the updated container with the restored data, the logs consistently show this error, which leads to a fresh setup screen:

No encryption key found - Auto-generating and saving to: /home/node/.n8n/config

Please share your workflow

Not applicable. This issue is related to the deployment environment, not a specific workflow.

Share the output returned by the last node

Not applicable.

Information on your n8n setup

  • n8n version:

    • Current (working): An older version (e.g., 1.98.1).
    • Target (failing): 1.104.1
  • Database (default: SQLite): SQLite

  • n8n EXECUTIONS_PROCESS setting (default: own, main): default (own)

  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker. I am not using Docker Compose, only the following docker run command:

    # Note: The specific version/tag was omitted in the original command.
    sudo docker run -d --restart unless-stopped -it --name n8n \
    -p 5678:5678 \
    -e N8N_HOST="subdomain.your-domain.com" \
    -e WEBHOOK_TUNNEL_URL="https://subdomain.your-domain.com/" \
    -e WEBHOOK_URL="https://subdomain.your-domain.com/" \
    -v ~/.n8n:/root/.n8n \
    n8nio/n8n
    
    
  • Operating system: Amazon Linux on an AWS EC2 t3.medium instance.

Hi Edwin,

I’m using docker compose and Postgres for storage so not the same situation but I did notice that your volume flag is mapping the local .n8n directory to /root/.n8n on the container. The n8n image uses /home/node/.n8n for its data store within the container so this might be your problem.

That is, change your -v flag to

-v ~/.n8n:/home/node/.n8n \
n8nio/n8n

See the docs here: Docker | n8n Docs

Hope this helps.
Simon


Hey @Simon.Lewis,

Quick update: Success! :tada: Thank you both so much for your help. Your insights were exactly what I needed to finally understand the root cause and solve this.

Final Diagnosis & Root Cause

You were both 100% correct. The core problem from the very beginning was the incorrect target path in my original docker run command’s bind mount.

  • I was using: -v ~/.n8n:/root/.n8n
  • The correct path is: /home/node/.n8n

This mismatch meant my host directory was mounted at /root/.n8n, a path the n8n image never reads from. The application wrote all of its data to /home/node/.n8n instead, which landed on the container’s ephemeral overlay2 filesystem. This is why my backups of the host directory were always empty and why every restart after an update failed to find the data.
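
A quick way to spot this kind of mismatch for anyone hitting the same symptom (a sketch; the container name `n8n` comes from the original `docker run` command):

```shell
# Show exactly what is mounted into the container, and where
docker inspect -f '{{ json .Mounts }}' n8n

# Compare the two candidate data directories inside the container:
# with the wrong mount, /root/.n8n stays empty while /home/node/.n8n
# (on the writable overlay layer) holds the real database.
docker exec n8n ls -la /root/.n8n /home/node/.n8n
```

If the mount target in the inspect output doesn’t match the directory where the large `database.sqlite` actually lives, the data is going to the container’s writable layer.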

The Successful Update & Migration Strategy

Following your advice, especially a recommendation from @Jon on Discord to use a managed volume, I migrated the entire setup to Docker Compose. For anyone facing a similar issue in the future, here is the final, successful plan:

  1. Created a clean backup from the correct overlay2 path we discovered during diagnostics.
  2. Set up a docker-compose.yml file, making sure to use the correct target path for the volume. This is now my single source of truth for the configuration.
    # docker-compose.yml
    services:
      n8n:
        image: n8nio/n8n:1.104.1
        # ... other settings
        volumes:
          - n8n_data:/home/node/.n8n # <-- The crucial corrected path!
    volumes:
      n8n_data:
        name: n8n_production_data
    
  3. Tested in Parallel: Before touching the production container, I deployed the new Docker Compose configuration on a separate port (5679) with a restored copy of the data. This allowed me to verify that the update, migrations, and new setup worked perfectly without any downtime.
  4. Went Live: After confirming the test was successful, I took down the old container and deployed the new Docker Compose setup on the production port.
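
The parallel-test step (3) can be sketched like this; the override file name `docker-compose.test.yml` and the project name `n8n-test` are assumptions, with the override simply publishing port 5679 instead of 5678:

```shell
# Run a test copy under a separate Compose project name so its
# containers and volumes don't collide with production. The override
# file is assumed to map "5679:5678" for the n8n service.
docker compose -p n8n-test -f docker-compose.yml -f docker-compose.test.yml up -d

# After verifying everything at http://localhost:5679, tear it down
docker compose -p n8n-test down

# Then deploy the real stack on the production port
docker compose up -d
```

Using `-p` keeps the test stack’s resources namespaced, so tearing it down cannot touch the production volume.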

The update to v1.104.1 is now complete, and the system is stable. Using Docker Compose will make all future updates incredibly simple.
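
With Compose in place, a routine update boils down to two commands (assuming you first bump the pinned image tag in docker-compose.yml):

```shell
# Edit docker-compose.yml to point at the new n8n tag, then:
docker compose pull   # fetch the new image
docker compose up -d  # recreate the container on the same named volume
```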

Thanks again to this amazing community! This issue is officially solved.

