Half of my workflows are not saved in DB after reboot

Hi all,

I am dealing with a really strange problem. I am running n8n (version 1.47.1) in a Docker container (with persistent volume) on an Ubuntu VM on a Proxmox host.

The last backup I have is from Sept. 1 and contains all workflows that had been built up to that point.

I then built, tested and enabled several completely new workflows.

I manually shut down the Ubuntu VM on Sept. 6 for a backup, and when the server came back up, all the workflows built since Sept. 1 were gone. I can’t find them in the UI or in database.sqlite.

Last entries in the Docker container log:
2024-09-06T01:31:50.691591973Z Removed triggers and pollers for workflow "yFwJN3cpilllhfhv"
2024-09-06T02:53:51.733414959Z Received SIGTERM. Shutting down...
2024-09-06T02:53:51.748316685Z
2024-09-06T02:53:51.748352699Z Stopping n8n...
2024-09-06T02:53:54.742908585Z Waiting for 2 active executions to finish...
2024-09-06T02:53:56.746222913Z Waiting for 2 active executions to finish...
2024-09-06T02:53:58.748129388Z Waiting for 2 active executions to finish...

I am hoping there is a way to explain this behavior and, even better, get the lost workflows back, as it would take me 3 days to recreate them :frowning:

Any help is much appreciated. Many thanks in advance!

Best, Chris

It looks like your topic is missing some important information. Could you provide the following if applicable?

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

I am running n8n (version 1.47.1) in a Docker container (with persistent volume) on an Ubuntu VM on a Proxmox host.

The last backup I have is from Sept. 1 and contains all workflows that had been built up to that point.

I then built, tested, and enabled several completely new workflows, which were executing successfully.

I manually shut down the Ubuntu VM on Sept. 6 for a backup, and when the server came back up, all the workflows built since Sept. 1 were gone. I can’t find them in the UI or in database.sqlite. There is a sqlite-journal file, but it does not contain any reference to the lost workflows.
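
In case it helps, here is roughly how I checked, working on copies so SQLite wouldn’t touch the originals. The workflow_entity table is what my n8n schema uses, and the grep pattern is just one of my lost workflow names; adjust both for your setup:

# copy the database and journal together, so any rollback happens on the copies
cp database.sqlite database.sqlite-journal /tmp/

# list the workflows the database currently knows about
sqlite3 /tmp/database.sqlite "SELECT id, name, updatedAt FROM workflow_entity ORDER BY updatedAt DESC;"

# scan the raw journal for any trace of a lost workflow
strings database.sqlite-journal | grep -i "company-acme_GetStatus"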

Last entries in the Docker container log are below. It seems strange that there are no entries between 8/26 and 9/5...

2024-08-26T01:26:01.115040669Z There was a problem in 'Microsoft Outlook Trigger' node in workflow 'TMVvG6AoNn63bngM': 'undefined'
2024-09-05T13:39:40.775413431Z Removed triggers and pollers for workflow "7hRjrOYShcpfdtCQ"
2024-09-06T01:31:50.691591973Z Removed triggers and pollers for workflow "yFwJN3cpilllhfhv"
2024-09-06T02:53:51.733414959Z Received SIGTERM. Shutting down...
2024-09-06T02:53:51.748316685Z 
2024-09-06T02:53:51.748352699Z Stopping n8n...
2024-09-06T02:53:54.742908585Z Waiting for 2 active executions to finish...
2024-09-06T02:53:56.746222913Z Waiting for 2 active executions to finish...
2024-09-06T02:53:58.748129388Z Waiting for 2 active executions to finish...
2024-09-06T02:54:00.751254067Z Waiting for 2 active executions to finish...

I am hoping there is a way to explain this behavior and, even better, get the lost workflows back, as it would take me 3 days to recreate them.

This sucks especially because I was using this setup as a proof of concept for our leadership to move from Azure Logic Apps to n8n Enterprise. I’m sure there are failings on my side, such as not backing up the workflows more directly and relying on SQLite instead of setting up a dedicated PostgreSQL database. But it still baffles me that the integrated solution that ships with n8n would lose data on shutdown.
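
For anyone who finds this thread later: a scheduled export like the sketch below is what I should have had in place. It uses n8n’s built-in CLI via docker exec; the container name n8n-container is made up, and the output lands inside the mounted volume so it survives restarts.

# export all workflows to a dated JSON file inside the persistent volume
docker exec n8n-container mkdir -p /home/node/.n8n/backup
docker exec n8n-container n8n export:workflow --all --output=/home/node/.n8n/backup/workflows-$(date +%F).json

A daily cron entry on the VM running this would have cut my worst case to 24 hours of lost work.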

Any help is much appreciated. Many thanks in advance!

edit1: The last thing done in n8n was creating an error workflow that sends a Teams channel message.

edit2: Last-modified date on the sqlite file is 2024-09-06. Here is the ls -l output of the /var/lib/docker/volumes//_data directory:

     4096 Jun  6 23:21 binaryData
       56 Jun  6 23:21 config
        0 Jul  2 10:29 crash.journal
595775488 Sep  6 02:53 database.sqlite
101019360 Sep  6 02:17 database.sqlite-journal
     4096 Jun  6 23:21 git
 10516650 Sep  5 21:36 n8nEventLog-1.log
 10488011 Sep  5 09:06 n8nEventLog-2.log
 10499298 Sep  5 00:49 n8nEventLog-3.log
  7223255 Sep  6 02:53 n8nEventLog.log
     4096 Jun 26 00:22 nodes
     4096 Jun  6 23:21 ssh
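
One thing I learned while digging: if the first 8 bytes of a -journal file match SQLite’s journal magic, it is a “hot” rollback journal, and SQLite will undo the interrupted transaction the next time anything opens the database. I don’t know whether a single rolled-back transaction can account for five days of workflows, but I checked mine like this (xxd ships with vim; od -A x -t x1 would work too):

# a hot rollback journal starts with the magic bytes d9 d5 05 f9 20 a1 63 d7
xxd -l 8 database.sqlite-journal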

There is another n8n container on the Docker host, but it has its own persistent volume and a completely different naming convention for the stack and volume. I checked that instance’s sqlite file as well; no relation.

I don’t know if other files have been affected/lost.

I’m not a Docker expert, but I’m hoping there is something transient still in the backup. I tried looking into the overlay2 files, but don’t know what to look for.
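
Concretely, this is the brute-force search I ran across Docker’s whole data directory, in case a fragment survived in another layer or volume (the pattern is one of my lost workflow names):

# list every file under Docker's data dir that mentions a lost workflow
sudo grep -ral "company-acme_GetStatus" /var/lib/docker/ 2>/dev/null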

This is my compose file for this instance (I’m using a Portainer Stack):

version: '3'

services:
  n8n:
    image: docker.n8n.io/n8nio/n8n
    ports:
      - "8182:5678"
    volumes:
      - n8n_data:/home/node/.n8n
    env_file: stack.env
volumes:
  n8n_data:

(the only variable in the stack.env is N8N_SECURE_COOKIE=false)
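
For what it’s worth, this is the volume-level backup I plan to run from now on, stopping the container first so the SQLite file is quiescent. Note that Portainer prefixes volume names with the stack name, so n8n_data is probably <stackname>_n8n_data on disk (docker volume ls shows the real name), and n8n-container is again a placeholder:

# stop n8n so SQLite is not mid-transaction, then tar the whole volume
docker stop n8n-container
docker run --rm -v n8n_data:/data -v "$PWD":/backup alpine \
  tar czf "/backup/n8n_data-$(date +%F).tgz" -C /data .
docker start n8n-container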

When I start the VM after restoring the backup, the n8n stack shows the container as “exited / Stopped for 3 days with exit code 137”.
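
From what I can tell, exit code 137 means the process was killed with SIGKILL (128 + 9). Docker’s default stop timeout is 10 seconds, and the log above shows n8n still waiting on two active executions when the VM went down, so my guess is the container was killed mid-write. If that’s right, a longer grace period on shutdown might have avoided this (container name hypothetical):

# give n8n up to 120 s to finish active executions before Docker falls back to SIGKILL
docker stop -t 120 n8n-container

A stop_grace_period entry in the compose file should achieve the same for stack-level shutdowns.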

Container Inspect shows this:

15bf2be3e55b29f7ee02e734417786f4f6150505e05088a59cd6328073046b00
  AppArmorProfile docker-default
  Args
    0 --
    1 /docker-entrypoint.sh
  Config
    AttachStderr true
    AttachStdin false
    AttachStdout true
    Cmd
    Domainname
    Entrypoint
      0 tini
      1 --
      2 /docker-entrypoint.sh
    Env
      0 N8N_SECURE_COOKIE=false
      1 PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
      2 NODE_VERSION=20.14.0
      3 YARN_VERSION=1.22.22
      4 NODE_ICU_DATA=/usr/local/lib/node_modules/full-icu
      5 N8N_VERSION=1.47.1
      6 NODE_ENV=production
      7 N8N_RELEASE_TYPE=stable
      8 SHELL=/bin/sh

I see the new workflows referenced in n8nEventLog.log; here is one execution:

{"__type":"$$EventMessageNode","id":"d0fd1b70-7228-446e-853f-0e895ac7f4cc","ts":"2024-09-05T17:47:29.871-04:00","eventName":"n8n.node.started","message":"n8n.node.started","payload":{"workflowId":"ZYRFCp4dK7WoyXWC","workflowName":"company-acme_GetStatus","executionId":"211669","nodeType":"n8n-nodes-base.scheduleTrigger","nodeName":"Schedule Trigger"}}

{"__type":"$$EventMessageConfirm","confirm":"d0fd1b70-7228-446e-853f-0e895ac7f4cc","ts":"2024-09-05T17:47:29.872-04:00","source":{"id":"0","name":"eventBus"}}

{"__type":"$$EventMessageWorkflow","id":"b2355822-a785-4d0c-89d4-b3204bc97d51","ts":"2024-09-05T17:47:29.872-04:00","eventName":"n8n.workflow.started","message":"n8n.workflow.started","payload":{"executionId":"211669","workflowId":"ZYRFCp4dK7WoyXWC","isManual":false,"workflowName":"company-acme_GetStatus"}}

{"__type":"$$EventMessageConfirm","confirm":"b2355822-a785-4d0c-89d4-b3204bc97d51","ts":"2024-09-05T17:47:29.872-04:00","source":{"id":"0","name":"eventBus"}}

{"__type":"$$EventMessageNode","id":"68d58d1f-8570-41b2-a5f1-8cb960df0baa","ts":"2024-09-05T17:47:29.872-04:00","eventName":"n8n.node.finished","message":"n8n.node.finished","payload":{"workflowId":"ZYRFCp4dK7WoyXWC","workflowName":"company-acme_GetStatus","executionId":"211669","nodeType":"n8n-nodes-base.scheduleTrigger","nodeName":"Schedule Trigger"}}

{"__type":"$$EventMessageConfirm","confirm":"68d58d1f-8570-41b2-a5f1-8cb960df0baa","ts":"2024-09-05T17:47:29.872-04:00","source":{"id":"0","name":"eventBus"}}

{"__type":"$$EventMessageNode","id":"7981d21f-986a-416e-8b15-22cede49e3f3","ts":"2024-09-05T17:47:29.872-04:00","eventName":"n8n.node.started","message":"n8n.node.started","payload":{"workflowId":"ZYRFCp4dK7WoyXWC","workflowName":"company-acme_GetStatus","executionId":"211669","nodeType":"n8n-nodes-base.httpRequest","nodeName":"HTTP Create Token"}}

{"__type":"$$EventMessageConfirm","confirm":"7981d21f-986a-416e-8b15-22cede49e3f3","ts":"2024-09-05T17:47:29.872-04:00","source":{"id":"0","name":"eventBus"}}

[...]

{"__type":"$$EventMessageConfirm","confirm":"e6290586-9419-4f7e-b94b-d27d6e8e4408","ts":"2024-09-05T17:47:31.074-04:00","source":{"id":"0","name":"eventBus"}}

{"__type":"$$EventMessageNode","id":"e34f2963-00da-4fd7-8350-3c30ba68cc08","ts":"2024-09-05T17:47:31.074-04:00","eventName":"n8n.node.started","message":"n8n.node.started","payload":{"workflowId":"ZYRFCp4dK7WoyXWC","workflowName":"company-acme_GetStatus","executionId":"211669","nodeType":"n8n-nodes-base.httpRequest","nodeName":"HTTP GetStatus"}}
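
At least the event logs give me an inventory of what was lost. Run from inside the volume’s _data directory, something like this pulls out the distinct workflow IDs and names:

# collect the unique workflow ids and names mentioned anywhere in the event logs
grep -ho '"workflowId":"[^"]*"' n8nEventLog*.log | sort -u
grep -ho '"workflowName":"[^"]*"' n8nEventLog*.log | sort -u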

Hey @ChrisFG,

Welcome to the community :raised_hands:

That is an odd one. Normally I would assume the volume was not set, but your workflows do appear in the event log, which makes me think there may have been a database issue (possibly the disk ran out of space) that caused the workflows that were in memory but not yet saved to be cleared and lost forever on restart.

If you make new workflows now and save them, do you see any errors in the n8n UI or the console output?
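
As a first check, something like this on the host would rule out the obvious causes; the paths assume Docker’s default data directory, so adjust the volume name to yours:

# is the disk that holds the Docker volumes full?
df -h /var/lib/docker

# run SQLite's own consistency check against a copy of the database
cp /var/lib/docker/volumes/<volume>/_data/database.sqlite /tmp/check.sqlite
sqlite3 /tmp/check.sqlite "PRAGMA integrity_check;"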