Should filter node create a new branch for a sub-execution?

Describe the problem/error/question

I am running a workflow that processes a lot of data, so I’m using the Loop Over Items node with an Execute Workflow node to keep memory under control. I recently added another Filter node to the sub-workflow, and when it produced no output I was surprised to see that the sub-workflow’s output had two branches, the second being the data discarded by the Filter node, which then blew up my memory. I have worked around this with some extra logic in the sub-workflow, but the behavior surprised me.

Is this really the way people would want Filter to work? I would expect to get no output, or an empty item if I set the ‘Always Output Data’ option, which is what I am using.

If this were an If node, it would make sense for the dangling ‘false’ output to form another branch, but not for a Filter node.

What is the error message (if any)?

No error

Please share your workflow


This is not the real flow, but shows the issue.

Main workflow:
{
  "nodes": [
    {
      "parameters": {},
      "type": "n8n-nodes-base.manualTrigger",
      "typeVersion": 1,
      "position": [
        0,
        0
      ],
      "id": "6ad9f445-fea3-4ea1-8a5e-6554ea1d1489",
      "name": "When clicking ‘Execute workflow’"
    },
    {
      "parameters": {
        "options": {}
      },
      "type": "n8n-nodes-base.splitInBatches",
      "typeVersion": 3,
      "position": [
        220,
        0
      ],
      "id": "571c878d-e088-4a8d-91af-d7570a338316",
      "name": "Loop Over Items"
    },
    {
      "parameters": {
        "workflowId": {
          "__rl": true,
          "value": "HIyrZja4jAgdPem2",
          "mode": "list",
          "cachedResultName": "Test sub-workflow"
        },
        "workflowInputs": {
          "mappingMode": "defineBelow",
          "value": {},
          "matchingColumns": [],
          "schema": [],
          "attemptToConvertTypes": false,
          "convertFieldsToString": true
        },
        "options": {}
      },
      "type": "n8n-nodes-base.executeWorkflow",
      "typeVersion": 1.2,
      "position": [
        440,
        0
      ],
      "id": "b3c53c41-50a5-4c90-ae98-5c650b8d3b3a",
      "name": "Execute Workflow"
    }
  ],
  "connections": {
    "When clicking ‘Execute workflow’": {
      "main": [
        [
          {
            "node": "Loop Over Items",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Loop Over Items": {
      "main": [
        [],
        [
          {
            "node": "Execute Workflow",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Execute Workflow": {
      "main": [
        [
          {
            "node": "Loop Over Items",
            "type": "main",
            "index": 0
          }
        ]
      ]
    }
  },
  "pinData": {
    "When clicking ‘Execute workflow’": [
      {
        "name": "First item",
        "code": 1
      },
      {
        "name": "Second item",
        "code": 2
      }
    ]
  },
  "meta": {
    "instanceId": "ad9cf2e17ebbbded2408dd876a4325e6ed035be4cece49d385a5040ddb1a5a1c"
  }
}

Sub-workflow:
{
  "nodes": [
    {
      "parameters": {
        "inputSource": "passthrough"
      },
      "id": "c055762a-8fe7-4141-a639-df2372f30060",
      "typeVersion": 1.1,
      "name": "When Executed by Another Workflow",
      "type": "n8n-nodes-base.executeWorkflowTrigger",
      "position": [
        260,
        340
      ]
    },
    {
      "parameters": {
        "conditions": {
          "options": {
            "caseSensitive": true,
            "leftValue": "",
            "typeValidation": "strict",
            "version": 2
          },
          "conditions": [
            {
              "id": "e3ce0519-b944-4c77-9a06-fe591b64319b",
              "leftValue": "={{ $json.name }}",
              "rightValue": "Not there",
              "operator": {
                "type": "string",
                "operation": "equals",
                "name": "filter.operator.equals"
              }
            }
          ],
          "combinator": "and"
        },
        "options": {}
      },
      "type": "n8n-nodes-base.filter",
      "typeVersion": 2.2,
      "position": [
        480,
        340
      ],
      "id": "a9afe372-170e-4182-946b-3b4f2467b9ac",
      "name": "Filter",
      "alwaysOutputData": true
    }
  ],
  "connections": {
    "When Executed by Another Workflow": {
      "main": [
        [
          {
            "node": "Filter",
            "type": "main",
            "index": 0
          }
        ]
      ]
    }
  },
  "pinData": {},
  "meta": {
    "instanceId": "ad9cf2e17ebbbded2408dd876a4325e6ed035be4cece49d385a5040ddb1a5a1c"
  }
}

Share the output returned by the last node

Information on your n8n setup

  • n8n version: 1.97.1
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app): n8n cloud
  • Operating system:

You can try the following; this should solve your problem:
Disable Always Output Data in the Filter node. The n8n documentation explains that this option causes the node to return an empty item even when nothing passes the filter, which can lead to “ghost branches” that consume memory, especially within loops or sub-executions.
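
For illustration, in the sub-workflow JSON above this amounts to removing the Filter node’s last property (or setting it to false); in the editor, the option lives in the node’s Settings tab:

  "alwaysOutputData": false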

Add an extra node to pass along only the desired branch. A common approach in the community is to place an If or Switch node right after the Filter to explicitly discard empty items, especially when you use alwaysOutputData: true. For example:

Detect empty items by checking that Object.keys($items()[0].json).length === 0

Then route only the valid items to the next part of the flow. This keeps the execution lean without feeding unwanted data back into the main flow; a sketch of such a check follows.
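
A minimal sketch of that check, written as an n8n expression you could use as the If node’s condition (it evaluates to true only for items that actually contain data; the exact condition setup is just one way to arrange it):

{{ Object.keys($json).length > 0 }}

Connect the true output onward and leave the false output unconnected so that empty items are dropped.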

Validate output in Execute Workflow within loops. For sub-executions with Execute Workflow or Execute Sub-Workflow, the official documentation states:

If Mode is set to Run once for each item, the node produces output for every iteration, possibly including empty items if the sub-workflow doesn’t discard them. It’s recommended to validate afterward (with an If node, either in the sub-workflow or in the parent workflow) to avoid accumulating empty objects that overload memory.
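
As one possible sketch of such a parent-side check, a Code node (set to Run Once for All Items) placed between the Execute Workflow node and the loop could drop empty items before they re-enter it; the node choice and placement are just one arrangement:

// Keep only items that actually contain data; empty items
// returned by the sub-workflow are discarded here.
return $input.all().filter(item => Object.keys(item.json).length > 0);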

I found workarounds using an If node; I was just surprised by the behavior. And I think I muddied things by mentioning Always Output Data, as I don’t think I had that enabled in my initial setup, yet I still got the second branch output from the Filter node.

I was mainly wondering whether there is a real use for this behavior that wouldn’t be better served by an If node. To me, the Filter node’s discarded items should be truly discarded.
