Option for blocking parallel workflow execution

When using a Cron trigger with a short interval, it can happen that several instances of the same workflow run in parallel. While one instance is still chewing through its data, the next one starts on the same data, and the backlog builds up.

For the time being I use a blocker like the one below, but I think this could be handled even better as a workflow option.

```json
{
  "nodes": [
    {
      "parameters": {},
      "name": "Start",
      "type": "n8n-nodes-base.start",
      "typeVersion": 1,
      "position": [260, -390]
    },
    {
      "parameters": {
        "triggerTimes": {
          "item": [
            {
              "mode": "everyMinute"
            }
          ]
        }
      },
      "name": "Cron",
      "type": "n8n-nodes-base.cron",
      "typeVersion": 1,
      "position": [260, -140]
    },
    {
      "parameters": {
        "conditions": {
          "boolean": [],
          "string": [],
          "number": [
            {
              "value1": "={{$json[\"runningWorkflows\"]}}",
              "operation": "larger",
              "value2": 1
            }
          ]
        }
      },
      "name": "IF already running",
      "type": "n8n-nodes-base.if",
      "typeVersion": 1,
      "position": [880, -270]
    },
    {
      "parameters": {},
      "name": "NoOp",
      "type": "n8n-nodes-base.noOp",
      "typeVersion": 1,
      "position": [1070, -290]
    },
    {
      "parameters": {
        "functionCode": "const returnItems = [];\nlet runningWorkflows = 0;\n\n// Count how many currently running executions belong to this workflow.\nfor (const item of $node[\"Get current Executions\"].json[\"data\"]) {\n  if (item.workflowId == $workflow.id) {\n    runningWorkflows++;\n  }\n}\n\nreturnItems.push({json: {\"runningWorkflows\": runningWorkflows}});\n\nreturn returnItems;\n"
      },
      "name": "Get runningWorkflows",
      "type": "n8n-nodes-base.function",
      "typeVersion": 1,
      "position": [710, -270]
    },
    {
      "parameters": {
        "url": "={{$node[\"Init\"].parameter[\"values\"][\"string\"][0][\"value\"]}}/rest/executions-current",
        "allowUnauthorizedCerts": true,
        "jsonParameters": true,
        "options": {}
      },
      "name": "Get current Executions",
      "type": "n8n-nodes-base.httpRequest",
      "typeVersion": 1,
      "position": [530, -270]
    }
  ],
  "connections": {
    "Start": {
      "main": [[{ "node": "Get current Executions", "type": "main", "index": 0 }]]
    },
    "Cron": {
      "main": [[{ "node": "Get current Executions", "type": "main", "index": 0 }]]
    },
    "IF already running": {
      "main": [[{ "node": "NoOp", "type": "main", "index": 0 }]]
    },
    "Get runningWorkflows": {
      "main": [[{ "node": "IF already running", "type": "main", "index": 0 }]]
    },
    "Get current Executions": {
      "main": [[{ "node": "Get runningWorkflows", "type": "main", "index": 0 }]]
    }
  }
}
```
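The core of the blocker is the Function node's counting logic: the workflow fetches the currently running executions from n8n's `/rest/executions-current` endpoint and skips the run if more than one execution of this workflow is active (the current execution is itself in the list). A minimal standalone sketch of that decision, assuming the response shape shown in the workflow above:

```javascript
// Sketch of the blocker's gating logic. Assumes the
// /rest/executions-current response has the shape
// { data: [{ workflowId: "..." }, ...] }.
function shouldSkip(executions, currentWorkflowId) {
  // Count how many currently running executions belong to this workflow.
  const running = executions.filter(
    (e) => e.workflowId === currentWorkflowId
  ).length;
  // The current execution appears in the list too, so more than one
  // means another instance is already running.
  return running > 1;
}

// Example: two executions of workflow "42" are running in parallel.
const response = {
  data: [{ workflowId: "42" }, { workflowId: "42" }, { workflowId: "7" }],
};
console.log(shouldSkip(response.data, "42")); // true: skip this run
```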

I would be really interested in this as well. I am using n8n for alert management and I am running into race conditions. For example:

  1. Workflow execution 1 checks whether an alert with a given hash has already been created. It has not, so it proceeds.
  2. A second execution does the same check, but is faster.
  3. Execution 1 tries to create the alert, but it has already been created by the other execution.

Please advise on how to avoid this. If there is an EXECUTIONS_PROCESS variable to control how executions are processed, we need a similar option to force n8n to wait for each execution to finish before starting the next.
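Until such an option exists, the usual way around this particular race is to make the create operation itself idempotent on the alert hash, instead of checking first and creating afterwards. A rough sketch of the idea, using an in-memory `Map` as a stand-in for the alert backend (a real backend would enforce this with a unique constraint or an upsert on the hash):

```javascript
// Sketch: avoid check-then-create races by making creation idempotent.
// The Map here is a hypothetical stand-in for the alert store.
const alerts = new Map();

function createAlertIfAbsent(hash, payload) {
  // One create-if-absent step instead of separate "check" and "create".
  // Within a single JS process this is race-free because the event loop
  // runs this function to completion without interleaving.
  if (alerts.has(hash)) {
    return { created: false, alert: alerts.get(hash) };
  }
  alerts.set(hash, payload);
  return { created: true, alert: payload };
}

const first = createAlertIfAbsent("abc123", { message: "disk full" });
const second = createAlertIfAbsent("abc123", { message: "disk full" });
console.log(first.created, second.created); // true false
```

Across multiple n8n executions the same pattern needs the atomicity to live in the shared store (e.g. a database unique index on the hash), not in the workflow code.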


It’s a must-have feature to avoid problems in the field.


Yep, I’m moving from Integromat to n8n and having to find workarounds for this. We have two critical workflows that we must guarantee never run twice for the same items.

It should be a workflow setting with an option to either queue incoming triggers or refuse execution.
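Those two policies can be sketched as a gate in front of a workflow's run function. The names below are hypothetical, not real n8n APIs; it is just an illustration of "queue" versus "reject" semantics:

```javascript
// Sketch of the two proposed policies for a "prevent parallel executions"
// setting: queue oncoming triggers, or refuse them while a run is active.
function makeRunGate(run, policy /* "queue" | "reject" */) {
  let busy = false;
  const queue = [];

  async function execute(payload) {
    busy = true; // set synchronously, before the first await
    try {
      return await run(payload);
    } finally {
      busy = false;
      // "queue" policy: start the next waiting trigger, in arrival order.
      if (queue.length > 0) execute(queue.shift());
    }
  }

  return (payload) => {
    if (!busy) return execute(payload);
    if (policy === "queue") {
      queue.push(payload);
      return "queued"; // caller gets a status, not the run's result
    }
    return "rejected";
  };
}

// Example: a second trigger arriving during an active run is refused.
const gate = makeRunGate(
  () => new Promise((resolve) => setTimeout(resolve, 20)),
  "reject"
);
gate("first");
console.log(gate("second")); // "rejected"
```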


I really need this feature to prevent parallel workflow executions, especially for the Schedule Trigger node.


FWIW, my ideal solution for this would probably be a boolean slider in the Workflow Settings that says something like “Prevent parallel executions” (tooltip: “Only allow a single instance of this workflow to run at a time”). Thanks for considering!

I agree! I’m running into issues where webhooks are getting called so quickly that my upsert commands create multiple items instead of updating them. There would be lots of easy ways around this if I owned the API, but it’s Eventbrite’s API, and they don’t have separate webhook events for attendee creation and modification (I have no idea why).

You can use this implementation via RabbitMQ.

Works great: How to avoid a race condition with parallel jobs started by a callback? - #3 by Mulen

Ah, so RabbitMQ just turns the parallel execution into serial execution. Cool! I’ll look into it.
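Exactly: the queue buffers events and a single consumer processes them one at a time, so handlers never overlap. The same serialization idea, stripped of the broker, can be sketched in-process by chaining each event onto the previous one's completion (a sketch, not what n8n or RabbitMQ actually does internally):

```javascript
// Sketch of what the queue pattern buys you: events are handled
// strictly one at a time, in arrival order, even if they arrive at once.
function makeSerialQueue(handler) {
  let tail = Promise.resolve();
  return (event) => {
    // Chain this event onto the completion of the previous one.
    tail = tail.then(() => handler(event));
    return tail;
  };
}

// Usage: later events would otherwise finish first, but the chain
// preserves arrival order.
const order = [];
const enqueue = makeSerialQueue(async (n) => {
  await new Promise((resolve) => setTimeout(resolve, 10 - n));
  order.push(n);
});
Promise.all([enqueue(1), enqueue(2), enqueue(3)]).then(() => {
  console.log(order); // processed in arrival order: 1, 2, 3
});
```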

I’d really like n8n to have this feature built-in, though. It just means they need a local incoming webhook store, either in-memory or on-disk, which catches webhooks separately before passing them on to execution. It might even already be doing this anyway, unless it’s only using parallel listeners for every webhook or something…

Can’t you have one flow trigger the second flow at a certain point in the first flow, as a sub-flow?