With the command
n8n import:workflow --input=file.json you can import a workflow into an n8n instance. The ID that the imported workflow gets depends on the database n8n uses under the hood: the ID is either kept from the input JSON file, or n8n creates a new ID by incrementing the last ID in the database. That is clearly a bug, because the behavior should be the same regardless of which database is used.
So the question is: which behavior is by design? I would argue that keeping the ID from the JSON file should be the intended behavior, because that way I can recreate an exact copy of an existing n8n instance, for example when moving workflows from a development environment to production.
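To make the dev-to-prod scenario concrete, a sketch of the round trip using the documented n8n CLI flags (the workflows/ directory name is just an example):

```shell
# On the development instance: export every workflow,
# one JSON file per workflow, into workflows/
n8n export:workflow --all --separate --output=workflows/

# On the production instance: import them back.
# Only if the IDs from the JSON files are preserved is
# the result an exact copy of the source instance.
n8n import:workflow --separate --input=workflows/
```

If the import renumbers the workflows, anything that references them by ID (webhook URLs, external tooling, sub-workflow calls by ID) breaks after the migration.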
Below is the behavior for the three databases:
- SQLite (default database): IDs are kept from the JSON file.
- Postgres: n8n creates new IDs by incrementing the last ID.
- MySQL: IDs are kept from the JSON file.
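The "IDs are kept" case is easy to reproduce outside n8n. A minimal sketch with Python's built-in sqlite3 module (table and columns simplified from n8n's real schema): when the INSERT passes the id explicitly, the auto-incrementing column does not override it.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE workflow_entity (id INTEGER PRIMARY KEY, name TEXT)")

# Import-style insert: the id from the JSON file is passed explicitly,
# so SQLite keeps it even though the column is auto-incrementing.
conn.execute("INSERT INTO workflow_entity (id, name) VALUES (?, ?)", (54, "My workflow"))

row = conn.execute("SELECT id FROM workflow_entity WHERE name = ?", ("My workflow",)).fetchone()
print(row[0])  # 54 — the explicit id survives the insert
```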
The interesting thing is that the SQL queries that create the new workflow (as a result of the import command) are different for Postgres and MySQL:
Postgres:
INSERT INTO "public"."workflow_entity"("name", "active", "nodes", "connections", "createdAt", "updatedAt", "settings", "staticData", "pinData") VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9)

MySQL:
INSERT INTO workflow_entity(id, name, active, nodes, connections, createdAt, updatedAt, settings, staticData, pinData) VALUES (54, ...
As you can see, for Postgres the value for the id field is not passed at all, so the database has to generate a new one itself. That might be an ORM issue.
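That omission alone is enough to explain the renumbering: when the id column is missing from the INSERT (the Postgres-style query above), the database falls back to its own sequence and assigns the next value. A minimal sketch of that second query shape, again using SQLite for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE workflow_entity (id INTEGER PRIMARY KEY, name TEXT)")
# Pretend a workflow with id 54 already exists in the target instance.
conn.execute("INSERT INTO workflow_entity (id, name) VALUES (?, ?)", (54, "existing"))

# Postgres-style query: id is not in the column list, so the database
# generates the next value itself instead of keeping the imported one.
conn.execute("INSERT INTO workflow_entity (name) VALUES (?)", ("imported",))

new_id = conn.execute("SELECT id FROM workflow_entity WHERE name = ?", ("imported",)).fetchone()[0]
print(new_id)  # 55 — a fresh id, not the one from the JSON file
```

So the fix would presumably be to make the Postgres code path include the id from the JSON file in the INSERT, the way the MySQL path already does.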