Deactivating old n8n and moving workflows to new instance

Hi. Been away for almost a year (missed 35 updates :sweat_smile:) but happy to report my workflows have all worked flawlessly in that time and it’s great to see things ticking along so nicely and the community continuing to be extremely helpful.

On my travels I discovered the Oracle Cloud Free Tier and followed @MutedJam’s awesome tutorial to get a fresh n8n instance up and running which seems to be working :ok_hand:

I noticed in the webhook node however that part of the URL has Tom’s name from his blog for some reason and just wanted to check that isn’t going to be problematic somehow?

I also now have two n8n instances obviously and so I’m wondering about best practices for deactivating the older one. Is it as simple as turning off the Hetzner server it’s on? And how should I go about copying workflows over??

Lastly, I’m currently messaging from the community account I had before but I think that’s separate, right? Shouldn’t need to make a new one? Sorry if that’s a stupid question.

Any help much appreciated, cheers

Information on your n8n setup

  • n8n version: 1.5.1
  • Database (default: SQLite): :man_shrugging:
  • n8n EXECUTIONS_PROCESS setting (default: own, main): :man_shrugging:
  • Running n8n via: Docker
  • Operating system: macOS 12.5.1

Hi @Tony_McHugh :wave: Welcome back! :smiley: There’s definitely been quite a few updates since you were last here.

I believe following Tom’s guide as written should be fine, especially to get up and running and test - but you will probably want to change that webhook URL eventually if you’re running a reverse proxy :sweat_smile: You can read more about that here.

I’m not familiar with Hetzner so I would assume that’s correct - you can also find some information on how to cancel here, if necessary. Before you cancel, you can find out how to export your workflows from the CLI here: CLI commands | n8n Docs. That page also covers how to import!
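Just in case it’s useful, the export / import commands from those docs look roughly like this for a Docker setup (the container names here are placeholders; check yours with docker ps):

```shell
# On the old instance (container name is a placeholder)
docker exec -u node n8n-old n8n export:workflow --all --output=/home/node/workflows.json
docker exec -u node n8n-old n8n export:credentials --all --output=/home/node/credentials.json

# After copying the files over, on the new instance:
docker exec -u node n8n-new n8n import:workflow --input=/home/node/workflows.json
docker exec -u node n8n-new n8n import:credentials --input=/home/node/credentials.json
```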

Also, no need to make another forum / community account :smiley: You’re all good, there.


Hi @EmeraldHerald. Thanks for these. I managed to simply download and then import the workflows from the old instance to the new one handily enough.

Now I want to fix the webhook URL before deleting the old instance so I can make sure everything still works.

In my Docker Compose YAML file the webhook URL is WEBHOOK_URL=https://${SUBDOMAIN}.${DOMAIN_NAME}.

So presumably I need to change those two domain variables? Where are they defined, and any idea how I managed to set them to @MutedJam’s blog domain / subdomain?



Where are they defined

Hi @Tony_McHugh, you wouldn’t need to change the individual variables; instead you can simply change the full line to something like WEBHOOK_URL=https://your-own-domain.example/ (that URL is a placeholder; substitute your actual domain)

and any idea how I managed to set them to @MutedJam’s blog domain / subdomain?

The blog post includes an example WEBHOOK_URL value in its docker compose file, so I reckon this is where the value originally came from. You can update it anytime though (as long as you restart n8n after changing it) :slight_smile:
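For reference, the relevant part of a docker-compose.yml would look something like this (the domain is a placeholder, not the value from my blog):

```yaml
# Hypothetical excerpt; substitute your own domain
services:
  n8n:
    environment:
      - WEBHOOK_URL=https://n8n.example.com/
```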

As for migrating your stuff over it seems like this has been sorted already following @EmeraldHerald’s suggestion.

On a slightly related note, when using Oracle’s free tier make sure to frequently backup your workflows and credentials outside of Oracle’s infrastructure. Since I originally wrote the blog post there have been a number of reports on the interwebs about them simply disabling accounts of free users, with no support available to restore access. Not sure how likely this scenario is, but it’s not a bad idea to be prepared just in case.


Hi @MutedJam, welcome back sir. Yeah I looked through your blog post again and realised that’s probably what happened, whoops. So there’s no need to worry about my domain and subdomain variables being set incorrectly?

Is there anything else, aside from saving workflows / credentials (and simply turning off my old server), I should be aware of in terms of removing n8n?

Thanks for the info on Oracle disabling accounts. How would you recommend making regular backups?


Exactly, no need to worry. Virtually all environment variables can be updated after the first launch of n8n without any side effects (the only exception being the N8N_ENCRYPTION_KEY, which can only be set once, on the very first launch).

Is there anything else, aside from saving workflows / credentials (and simply turning off my old server), I should be aware of in terms of removing n8n?

Nope, just make sure to actually delete your server from the Hetzner Cloud Console if you no longer need it (to avoid having to pay for it) :slight_smile:

How would you recommend making regular backups?

n8n lets you export workflows and credentials through the CLI. So, you could regularly run the CLI commands and write backup files, then copy them to a safe location. You could even automate this with n8n if you want.

Hi @MutedJam. Thanks man. For clarity, I could create a .env file with the correct domain / subdomain, right?
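Something like this is what I have in mind, just to check (values are placeholders, I’d use my real domain):

```shell
# .env file next to docker-compose.yml; docker compose picks it up automatically
SUBDOMAIN=n8n
DOMAIN_NAME=example.com
```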

In terms of automated backups, would you mind giving me some clues as to how to approach that? I’m guessing a cron node followed by execute command node?


So my personal approach is plugging in an external HDD to my local machine once a week, then running a small bash script which would:

  1. Run the CLI commands exporting workflows and decrypted credentials via SSH
  2. Use scp to copy the exported files to my local HD
  3. Remove the data from the storage used by the container after copying

It also backs up a bunch of other (non-n8n) stuff so I can’t share the full script, but this is the basic idea:

#!/usr/bin/env bash

N8N_CONTAINER="n8n-n8n-1" # The name of the n8n docker container
N8N_SSH_HOST="emma" # The name of the host where the n8n docker container is running
CONTAINER_STORAGE_PATH="/storage" # The path to the storage folder inside the docker container
HOST_STORAGE_PATH="/home/tom/container/n8n/storage" # The path to the storage folder on the docker host
LOCAL_STORAGE_PATH="." # The path to the backup folder on the local machine

# Run workflow and credentials export in container
ssh $N8N_SSH_HOST mkdir -p $HOST_STORAGE_PATH/$(date +%Y-%m-%d)
ssh $N8N_SSH_HOST docker exec -u node $N8N_CONTAINER n8n export:workflow --all --output=$CONTAINER_STORAGE_PATH/$(date +%Y-%m-%d)/workflows.json
ssh $N8N_SSH_HOST docker exec -u node $N8N_CONTAINER n8n export:credentials --all --decrypted --output=$CONTAINER_STORAGE_PATH/$(date +%Y-%m-%d)/credentials.json

# Copy the backup from the docker host to the local machine
scp -r $N8N_SSH_HOST:$HOST_STORAGE_PATH/$(date +%Y-%m-%d) $LOCAL_STORAGE_PATH

# Remove the export from the container
ssh $N8N_SSH_HOST rm -rf $HOST_STORAGE_PATH/$(date +%Y-%m-%d)

This example assumes that you can ssh into your VPS using ssh emma and that you’re making the host path /home/tom/container/n8n/storage available to the n8n container as /storage. On my end, I am handling this through the below line in the volumes section of my docker-compose.yml file:

      - ./storage:/storage

Your setup most likely looks a bit different, so you’d need to adjust the variables in the example script accordingly.
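One small thing to watch out for in the sketch above: $(date +%Y-%m-%d) is evaluated separately for every command, so a run straddling midnight could end up writing to (and deleting) two different folders. Capturing the date once avoids this:

```shell
#!/usr/bin/env bash

# Evaluate the date a single time and reuse the value everywhere
BACKUP_DATE=$(date +%Y-%m-%d)
echo "$BACKUP_DATE"
```

You’d then use $BACKUP_DATE in place of each $(date +%Y-%m-%d) call.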

This is just a starting point, you can of course tweak this process in line with your needs. A good improvement could be to remove the --decrypted option and only backup encrypted credentials, but this means you’d also have to store the encryptionKey from your .n8n folder somewhere (otherwise your credentials could no longer be read by n8n).
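If you do drop --decrypted, one way to grab the key for safekeeping (assuming the Docker setup from the script above, with the same container name) would be:

```shell
# n8n keeps the encryption key in the JSON config file inside the .n8n folder
docker exec -u node n8n-n8n-1 cat /home/node/.n8n/config
# Copy the "encryptionKey" value from the output into a password manager
```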

Yep, this would also work depending on your backup location. You can in theory also run these commands using the Execute Command node (no ssh needed in that case), then use Read Binary Files to read the exported files and upload them to Google Drive.

Or alternatively store them in a git repository to track changes. The above is just one of many options :slight_smile:


@MutedJam, this is great and no doubt very helpful. I’ll circle back if any issues but thank you very much :ok_hand: