Dropping input data before next node execution

In my workflow I need to execute a node without regard to the previous node's output data.

More specifically, I am pushing data into an Airtable table. I then want to turn around and read from the same table (with some filter criteria). However, because the previous node has input data, the search node is run once per input item.

In this diagram, at Point A you can see where I've inserted 47 items into Airtable. Then at Point B I want to read data from the same table, but the node executes 47 times because of the input data, returning a "fake" 2,209 items (47 × 47).

How can I structure this so that after the upsert, I can “ignore” input data and just do a clean read of the table as my new input data for the next node?

I tried splitting it into two different workflows, but the Execute Workflow node passes on the input data and I have the same issue.

Information on your n8n setup

  • n8n version: 1.56.2 (hosted on n8n cloud)
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via n8n cloud
  • Operating system: macOS


I always use a Limit node to limit items to 1 and, if needed, a Set node to clear the data from this one item.

You could also use a Code node which just returns one empty item.
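As a sketch, the Code node approach could look something like this (the `resetItems` wrapper is only for illustration here; inside an actual Code node set to "Run Once for All Items" you would just use the single `return` line):

```javascript
// Illustration of an n8n Code node body that discards all incoming
// items and emits a single empty item, so the downstream node
// (e.g. the Airtable search) executes exactly once.
function resetItems(items) {
  // Ignore whatever came in; return one empty item.
  return [{ json: {} }];
}

// In the real Code node, the entire body would simply be:
//   return [{ json: {} }];
const out = resetItems([{ json: { id: 1 } }, { json: { id: 2 } }]);
console.log(JSON.stringify(out)); // one item with an empty json payload
```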

I am also interested in other ways to achieve this which may be more elegant.

Hi @FelixL, thanks for the suggestion. I wasn't aware of the Limit node, and setting it to 1 will work in my case: the input data set is irrelevant for the Airtable search node, and the limit at least causes it to execute only once. So thanks for that.

I agree, though I'd still be curious what the "right" workflow design is to get around this issue.