Errors importing workflows via CLI with PostgreSQL

Has anyone had this problem? We use the CLI to import all workflows from our repository every time n8n starts in Docker, and we are using PostgreSQL. When we run docker compose from scratch (no PostgreSQL volumes exist yet), the import works perfectly without errors. However, when there is an existing PostgreSQL volume, we run into a CLI error.

The CLI command we are running is n8n import:workflow --separate --input=/data/workflows

Here are the final lines of the build in this scenario:

n8n | UserSettings were generated and saved to: /home/node/.n8n/config
postgres | 2023-06-07 14:59:28.104 UTC [42] ERROR: permission denied to create extension "uuid-ossp"
postgres | 2023-06-07 14:59:28.104 UTC [42] HINT: Must be superuser to create this extension.
postgres | 2023-06-07 14:59:28.104 UTC [42] STATEMENT: CREATE EXTENSION IF NOT EXISTS "uuid-ossp"
n8n | Importing 32 workflows…
n8n | An error occurred while importing workflows. See log messages for details.
n8n | Cannot read properties of undefined (reading 'forEach')

All 32 files in the specified directory are valid .json, and the import works perfectly on the very first build on a new volume. We also can’t seem to get at the aforementioned logs to investigate, because the container shuts down and constantly restarts after the error.

Are those postgres errors related? We were thinking that, since we can’t edit the id values of existing workflow_entity rows in Postgres, the CLI might be hitting the same restriction.
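For what it’s worth, the "permission denied to create extension" error usually goes away if the extension is created by a superuser before n8n’s non-root role ever needs it. The official postgres Docker image runs any scripts mounted into /docker-entrypoint-initdb.d as the superuser on first initialisation, so one sketch (the filename and mount point are assumptions, not part of the withPostgres example) would be:

```shell
#!/bin/sh
# Sketch of a postgres init script, e.g. mounted as
# /docker-entrypoint-initdb.d/init-extension.sh in the postgres container.
# It runs as the superuser on first init; POSTGRES_USER and POSTGRES_DB
# are set by the official postgres image.
set -e
psql -v ON_ERROR_STOP=1 -U "$POSTGRES_USER" -d "$POSTGRES_DB" <<'SQL'
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
SQL
```

Note this only runs when the data volume is initialised for the first time, so it would not retroactively fix an existing volume.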

We have followed the same setup as n8n/docker/compose/withPostgres in the n8n-io/n8n repository on GitHub.

Any insight would be appreciated.

Information on your n8n setup

  • n8n version: 0.221.2
  • Database (default: SQLite): PostgreSQL
  • n8n EXECUTIONS_PROCESS setting (default: own, main): own
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker
  • Operating system:

I have not seen this before, but I am also using a different PostgreSQL setup.

However, this sounds like a permission problem. Can you try running the import with postgres superuser/root permissions by temporarily changing the respective env values in your n8n docker compose setup?
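Concretely, that could mean temporarily pointing n8n’s database env vars at the superuser account in the n8n service’s environment. A minimal sketch, with placeholder values:

```shell
# Temporary test only -- point n8n at the postgres superuser account
# instead of the non-root role. Values here are placeholders; set them
# in the n8n service's environment in your compose file.
DB_TYPE=postgresdb
DB_POSTGRESDB_USER=postgres        # superuser instead of the n8n role
DB_POSTGRESDB_PASSWORD=changeme    # placeholder
```

Remember to switch back to the non-root role once the test is done.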

Thanks @MutedJam - this doesn’t seem to make a difference. Running as root user or non-root user results in the same error.

When we take the container down with docker compose down -v and then bring it back up, it works fine using both root and non-root.

We also tried this with the default SQLite database and it seems to happen there, too. The first build goes well, but subsequent builds error on the CLI workflow import step with the same error message (reading 'forEach'). In our sandbox we get around this by deleting the n8n directory on every docker start so it builds from scratch, but on production we can’t do it this way and still retain execution history.

Oh, that’s an interesting find. @micha I remember you looking into the db queries recently. Do you by any chance know what might cause this behaviour, where the first import works but subsequent imports fail?

At first I thought this could be an issue related to the auto-incrementing ids in postgres. We are currently testing and preparing the change from these auto-incrementing sequences to nanoids; that should happen in the next few weeks.

However, the forEach error sounds more like we are trying to import a workflow that has no node data (the import has two forEach() loops, both running on the nodes array). The error is thus most likely coming from the CLI, not from the database, and it seems to occur right after the CLI reads and parses the JSON and tries to loop through the content.
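If that theory is right, any export file missing a top-level nodes array would trip the importer. A crude way to check is a text search for the "nodes" key (this is not a real JSON parse, just a quick filter; the function name and directory are illustrative):

```shell
# Crude sanity check: list .json files in a directory that never
# mention a "nodes" key at all. grep -L prints files with NO match.
check_nodes() {
  grep -L '"nodes"' "$1"/*.json 2>/dev/null
}
# usage in the container might look like: check_nodes /data/workflows
```

Any filename it prints would be a candidate for the forEach failure.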

Have you, just to narrow things down, tried it with a single, simple workflow? Does this also break at the second import?

Thanks @micha - we will try it with 1 simple workflow but…

the workflow files are exactly the same. When we use the import CLI command on a new instance where the database is empty, the import works fine for all 33 of our workflow files. It behaves differently when trying to overwrite existing workflows in the database (apparently with both Postgres and SQLite).

Will try what you suggest and report back soon.

@micha and @MutedJam I can confirm that simplifying down to one workflow still produces the error when starting with an existing database.

Attaching to n8n
n8n | Importing 1 workflows…
n8n | An error occurred while importing workflows. See log messages for details.
n8n | Cannot read properties of undefined (reading 'forEach')

In case it matters, here is that one workflow:

[
  {
    "createdAt": "2023-04-22T14:21:34.909Z",
    "updatedAt": "2023-05-30T21:03:34.000Z",
    "id": "1",
    "name": "Current - Errors",
    "active": false,
    "nodes": [
      {
        "parameters": {
          "resource": "channelMessage",
          "teamId": "d2e20f35-d392-4c24-8449-fb9f772797ed",
          "channelId": "19:[email protected]",
          "messageType": "html",
          "message": "=<strong>Current sync error</strong>\n<ul>\n  <li><strong>Time</strong>: {{ $now.toFormat('MMMM d, yyyy h:mm:ss a') }}</li>\n  <li><strong>Workflow</strong>: {{ $json.workflow.name }}</li>\n  <li><strong>Message</strong>: {{ $json.execution.error.message }}</li>\n</ul>",
          "options": {}
        },
        "id": "a5ee1fda-cddc-46db-bbb6-fb5e509e1a14",
        "name": "Post to Teams",
        "type": "n8n-nodes-base.microsoftTeams",
        "typeVersion": 1,
        "position": [
          1000,
          400
        ],
        "notesInFlow": true,
        "credentials": {
          "microsoftTeamsOAuth2Api": {
            "id": "13",
            "name": "Current - Teams"
          }
        }
      },
      {
        "parameters": {},
        "id": "a307a541-fcde-4d50-80c2-c6c31a310713",
        "name": "Error trigger",
        "type": "n8n-nodes-base.errorTrigger",
        "typeVersion": 1,
        "position": [
          800,
          400
        ]
      }
    ],
    "connections": {
      "Error trigger": {
        "main": [
          [
            {
              "node": "Post to Teams",
              "type": "main",
              "index": 0
            }
          ]
        ]
      }
    },
    "settings": {},
    "staticData": null,
    "pinData": {},
    "versionId": "9b61df88-32e9-48e2-a295-e4cf756b0fb8",
    "triggerCount": 0,
    "tags": []
  }
]

When I delete the .sqlite file and try again, it builds successfully.

n8n | Migrations in progress, please do NOT stop the process.
n8n | Importing 1 workflows…
n8n | Migrations finished.
n8n | Successfully imported 1 workflow.
n8n | Successfully imported 14 credentials.
n8n | Initializing n8n process
n8n | n8n ready on 0.0.0.0, port 5678
n8n | Version: 0.221.2

I still can’t reproduce it. I set things up with docker 0.221.2 + postgres, mapping and importing at startup from workflows folder:

    volumes:
      - n8n_storage:/home/node/.n8n
      - ./workflows:/data/workflows
    command:
      - /bin/sh
      - -c
      - |
        n8n import:workflow --separate --input=/data/workflows
        n8n start --tunnel

and never receive an error.
I wonder if the problem here is some sort of permission issue on the db tables (which would also explain the extension error you receive).

OK. At the moment we are using SQLite with default settings, and it also happened to us on Postgres, again with default settings. We can certainly experiment with permissions. The update operation seems to be the issue, since creating workflows on the first build works without problems. This happens to us both on prod and in sandboxes, so I think hosting is irrelevant: the sandbox environment is Docker on Windows, and prod runs on Digital Ocean (though the prod db also runs via Docker).

Were you able to fix this issue? I am also running into it at the moment, unable to import workflows into a running n8n docker instance.

However, when the instance is restarted all workflows import without issue.

Hi @dingo-dev - no, we weren’t able to resolve it. We still have to replace the whole database (in our case SQLite) when starting n8n in order to import workflows. It works, but of course we lose all execution history with each restart. I’m pleased to learn that it’s not just me struggling with this, though. Maybe we can reignite the discussion with @micha and @MutedJam

@hndmn I was able to develop a sufficient workaround for now…

Instead of doing a complete import of the updated workflows in a directory, importing the workflows one by one seems to work without issue.

You can use this command: n8n import:workflow --input=/workflow/directory/workflow.json
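Wrapped in a loop, the workaround might look like the sketch below. The directory path is an example, and the second parameter only exists here so the n8n binary can be substituted when trying the loop outside the container:

```shell
# Sketch: import workflow files one by one instead of using --separate
# on the whole directory. $2 is an optional command override (assumption
# for illustration; defaults to the real n8n CLI call).
import_each() {
  dir="$1"
  cmd="${2:-n8n import:workflow}"
  for f in "$dir"/*.json; do
    [ -e "$f" ] || continue   # glob matched nothing
    $cmd --input="$f" || return 1
  done
}
# e.g. inside the container: import_each /data/workflows
```

The early return means the loop stops at the first file that fails, which also tells you which workflow is the problem.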

Oh, that’s a good idea. I could script that to loop through all .json files in a specific directory. Thanks for the tip!

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.