Collect/Merge item data when one input is not walked

Hello guys!

How do I merge data when one of the inputs is not taken/walked by the workflow?
I have a Function Node (1) that returns items. Based on a Switch Node (2) I route what happens next. After the routing I want to collect/merge (3) my items again.

The problem is: if my data source only has one type of item, the merge does not occur, since only one input is populated.
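To make it concrete, the emitting Function Node returns something like this (a minimal sketch; the type property and its values are just illustrative):

// Minimal sketch of the emitting Function Node (property names are illustrative).
// If every item carries the same type, the Switch sends everything down one
// output and the Merge never receives data on its other input.
return [
  { json: { id: 1, type: 'orderConfirmation' } },
  { json: { id: 2, type: 'orderConfirmation' } },
  // { json: { id: 3, type: 'invoice' } }, // uncomment to populate the second route
];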

I tried “Always Output Data” on the Switch, SQL, and Keep Source Merge nodes, but I cannot get this to work.

How would I go about that? I thought of using a Function Node as a replacement for the Merge (3), but it seems to only be triggered by either the top route or the bottom route.

Regards Sebastian

First, one question: are you on the latest n8n version?

I’m on 0.107.0

Is it because of this from the changelog?

  • Added the functionality to optionally save execution data after each node

Ah no, then it should be fine. We changed the execution behavior multiple versions ago, and I thought that with those changes it should actually work already. I will have to have another look.

As a workaround, you can try to add another Merge node between “Keep Source Merge” and “Merge2”. Set it to mode “Pass Through” for “Input 2”, and connect Input 1 to the Switch node (output 0) and Input 2 to “Keep Source Merge”.

I tried your solution, but I do not get any data passed through the pass-through Merge node. Here is a minimal repro:

I think I may have found a solution:

I have two emitting Function Nodes: one emits data that is all of the same type, the other emits data of different types. I use a Set Node to add a new property dummyData with the value false on both routes. When the data travels the top route, it a) sets dummyData to true and b) also triggers the bottom route via a pass-through merge. The passed-through input data is evaluated against the dummyData property once again: if it is true, it forwards the dummy data, else it forwards the “good” data. After both branches I merge the two routes via Append and filter out all items with dummyData: true.
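The clean-up after the Append merge is then just a small Function Node along these lines (a sketch; dummyData is the flag described above):

// Drop the placeholder items again so only the "good" data continues downstream.
return items.filter((item) => item.json.dummyData !== true);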

Pretty cumbersome imho but it seems to do the trick.

Edit: Well, in my bigger scenario it does not work like that. It seems to be a race condition, since resolving against the database actually takes a while, but prepping the dummyData is instant. :thinking:

@vuchl I did just push a fix. If you pull the Docker image n8nio/n8n:nightly in around 10 minutes, you can check if it works in your bigger scenario.

For clarification: How do I get the JSON from “🐛 Execute node also if it is a sibling but does not receive data” (n8n-io/n8n@c811294 · GitHub) into my n8n for testing?

Edit: Holy smokes. It seems to work now. I cannot follow the code of the commit completely, but it seems to do the trick. What is the recommended way to collect items now?

So you were able to get the code?

What do you mean by the recommended way to collect the items?

I pulled the nightly and updated my docker container.

What would you now recommend to make the parallel execution after the switch work? The workflow with the pass-through and append merge and some flag indicating which data to keep? I can’t seem to paste the JSON from the test case to look into it.

The copy & paste problem was because of trailing commas. I’ll check it out.

Sadly still not sure if I understand. The idea behind the fix is that nothing is needed, that it simply works as expected.

OK. Then I guess I’ll take it as it is. Thanks Jan!

Another question: could this fix have introduced some problems with additionally installed npm packages?

My Dockerfile looks like this:

FROM n8nio/n8n:nightly

RUN apk --no-cache add libxml2-utils libxslt

RUN npm i -g super-xmlllint
RUN npm i -g xslt-processor
RUN npm i -g lodash

My docker-compose.yml looks like this:

version: "3"

services:
  n8n:
    image: n8n-xml
    restart: always
    ports:
      - "5678:5678"
    environment:
      - N8N_BASIC_AUTH_ACTIVE=false
      - N8N_HOST=redacted
      - N8N_PORT=5678
      - N8N_PROTOCOL=https
      - NODE_ENV=production
      - WEBHOOK_TUNNEL_URL=redacted
      - NODE_FUNCTION_ALLOW_EXTERNAL=lodash,super-xmlllint,xslt-processor
      - NODE_TLS_REJECT_UNAUTHORIZED=0
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./.n8n:/home/node/.n8n

But my Workflow now throws this error later in the pipeline:

{
  "execution": {
    "id": "293",
    "url": "redacted/execution/293",
    "error": {
      "message": "Cannot find module 'xslt-processor'",
      "stack": "VMError: Cannot find module 'xslt-processor'\n    at _require (/data/node_modules/vm2/lib/sandbox.js:368:25)\n    at /data/packages/nodes-base/dist/nodes:1:132\n    at Object. (/data/packages/nodes-base/dist/nodes:10:14)\n    at NodeVM.run (/data/node_modules/vm2/lib/main.js:1121:29)\n    at Object.execute (/data/packages/nodes-base/nodes/Function.node.ts:87:22)\n    at Workflow.runNode (/data/packages/workflow/src/Workflow.ts:981:28)\n    at /data/packages/core/src/WorkflowExecute.ts:657:41"
    },
    "lastNodeExecuted": "cXML zu SaleOrder XSLT",
    "mode": "trigger"
  },
  "workflow": {
    "id": "15",
    "name": "EDI5"
  }
}
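For context, the failing Function Node uses the external module roughly like this (a sketch only, not my exact node; the property names and the xmlParse/xsltProcess API of the older xslt-processor releases are assumptions). The VMError comes from vm2, the sandbox Function Nodes run in: a require() of an external module only resolves there if the module is listed in NODE_FUNCTION_ALLOW_EXTERNAL and installed where the n8n process can find it.

// Sketch of the XSLT Function Node (property names and API shape are assumptions).
// require() of an external module only works inside the vm2 sandbox if the module
// is listed in NODE_FUNCTION_ALLOW_EXTERNAL and resolvable from n8n's node_modules.
const { xmlParse, xsltProcess } = require('xslt-processor');

const xmlDoc = xmlParse(items[0].json.cxml);       // hypothetical input property
const xslDoc = xmlParse(items[0].json.stylesheet); // hypothetical stylesheet property
const result = xsltProcess(xmlDoc, xslDoc);

return [{ json: { saleOrder: result } }];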

During the update of the Docker container everything seems fine:

as18:~$ ./update-n8n.sh
nightly: Pulling from n8nio/n8n
Digest: sha256:623a7671b766f9858997c4021dd5723e3a03f898fe245d9450175a83ffb703c7
Status: Image is up to date for n8nio/n8n:nightly
docker.io/n8nio/n8n:nightly
Sending build context to Docker daemon  117.2MB
Step 1/6 : FROM n8nio/n8n:nightly
 ---> b970f89bf38a
Step 2/6 : RUN apk --no-cache add libxml2-utils libxslt
 ---> Using cache
 ---> f5bada4a7be1
Step 3/6 : RUN npm i -g super-xmlllint
 ---> Running in cf631215b391

> [email protected] postinstall /usr/local/lib/node_modules/super-xmlllint
> node ./lib/validate-xmllint-installation.js

xmllint has been located. Ready to validate xml.
+ [email protected]
added 1 package from 1 contributor in 0.647s
Removing intermediate container cf631215b391
 ---> 5e71a4d9b5cd
Step 4/6 : RUN npm i -g xslt-processor
 ---> Running in 40c3e043b658
+ [email protected]
added 2 packages from 3 contributors in 0.705s
Removing intermediate container 40c3e043b658
 ---> 591510521c16
Step 5/6 : RUN npm i -g lodash
 ---> Running in 1344a30e2318
+ [email protected]
added 1 package from 2 contributors in 0.762s
Removing intermediate container 1344a30e2318
 ---> 34ce2761e259
Step 6/6 : RUN npm i -g node-fetch
 ---> Running in 35ce0ffc94e1
+ [email protected]
added 1 package from 1 contributor in 0.307s
Removing intermediate container 35ce0ffc94e1
 ---> 033a5447d0c3
Successfully built 033a5447d0c3
Successfully tagged n8n-xml:latest
Stopping it_n8n_1 ... done
Going to remove it_n8n_1
Are you sure? [yN] y
Removing it_n8n_1 ... done

I can also use stuff from lodash, like set and sortBy.

No, it is impossible that this fix would make any difference in that regard.

The nightly-build Docker image is, however, slightly different from the default image, as it gets built in a totally different way. I also cannot imagine that this should cause any problems here, but if it worked before and does not now, it is the only thing I can think of. Maybe you would then have to wait for the proper release next week.

Got released with [email protected]