Collecting data from multiple Baserow tables for further processing within n8n

Hi.

I am using self-hosted Baserow and self-hosted n8n.

I want to retrieve Baserow data in n8n and eventually build a GeoJSON/GPX out of it. However the data in Baserow is stored across a few tables, because it makes sense on that side to split it like such. So I cannot simply fetch an entire table then process it linearly.

I’ll need to fetch one full table (buildings), find all references to linked tables (a main-entrance row ID pointing into the main entrance table, a contact-person row ID pointing into the contact person table), fetch those linked records, then iterate through the initial buildings result and replace each row-ID reference with the actual data.

Now… I have no clue how to actually do that in the n8n world. I am somewhat comfortable in Python but this is very different.

It feels like I need to approach every node as taking the previous node’s results as input? A quick online search makes me think that I cannot store results the way I would create a basic variable in a standard programming language?

I am very confused and I would love to get some insight as to how to approach this.

Thank you.

Sure, the n8n pattern for this is: fetch the buildings table first, then use a Loop Over Items node to iterate over each building record, make HTTP Request calls for each linked table inside the loop, and use a Code node to merge the results before the loop exits.

Rough workflow shape:

  1. HTTP Request → GET your buildings table (Baserow API: /api/database/rows/table/{id}/)
  2. Loop Over Items (splits buildings into individual rows)
  3. Inside the loop: HTTP Request → GET the linked main_entrance row using the ID from the building record (/api/database/rows/table/{entrance_table_id}/{entrance_row_id}/)
  4. Inside the loop: HTTP Request → GET the linked contact_person row similarly
  5. Code node to merge: return [{ json: { ...items[0].json, main_entrance: items[1].json, contact: items[2].json } }]
  6. The loop collects the merged output from each iteration
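The Baserow endpoints in steps 1, 3 and 4 can be sketched as small URL builders. This is a sketch only: the base URL, table IDs and token are placeholders, and `user_field_names=true` is an optional Baserow query parameter that returns readable field names instead of `field_123` keys.

```javascript
// Hypothetical base URL and token; replace with your self-hosted instance's values.
const BASE = "https://baserow.example.com";
const headers = { Authorization: "Token YOUR_API_TOKEN" };

// Step 1: list all rows of a table (buildings).
function listRowsUrl(tableId) {
  return `${BASE}/api/database/rows/table/${tableId}/?user_field_names=true`;
}

// Steps 3–4: fetch a single linked row (entrance, contact person) by ID.
function getRowUrl(tableId, rowId) {
  return `${BASE}/api/database/rows/table/${tableId}/${rowId}/?user_field_names=true`;
}
```

In n8n you would paste these URLs (with expressions for the IDs) into the HTTP Request node rather than calling them from code.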

The Code node at step 5 runs with all three requests’ outputs in scope; you access them via $input.all() or $items() depending on your n8n version.
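Outside n8n, the step-5 merge is just an object spread. In a real Code node the three inputs would come from $input.all(); here they are simulated as plain objects with assumed field names:

```javascript
// Simulated inputs; in n8n these would be the three items reaching the Code node.
const building = { id: 1, name: "Town Hall", main_entrance: [{ id: 7, value: "North door" }] };
const entranceRow = { id: 7, lat: 48.1, lon: 11.5 };   // assumed fields
const contactRow = { id: 3, name: "A. Smith" };        // assumed fields

// Spread the building, then overwrite the link-reference fields with real data.
const merged = { ...building, main_entrance: entranceRow, contact: contactRow };
```

The spread keeps every original building field and only replaces the two link fields, which is all the merge needs to do.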

For the GeoJSON output at the end, another Code node after the loop builds the FeatureCollection from all the merged rows. Since you’re comfortable in Python, the JavaScript in Code nodes is basically the same logic: objects instead of dicts, map() instead of list comprehensions.
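That final Code node could look roughly like this. It is a sketch under assumptions: the lat/lon field names on the merged entrance data are made up, and your real schema will differ.

```javascript
// Build a GeoJSON FeatureCollection from the merged building rows.
// `main_entrance.lat` / `.lon` and `contact.name` are assumed field names.
function toFeatureCollection(rows) {
  return {
    type: "FeatureCollection",
    features: rows.map((row) => ({
      type: "Feature",
      geometry: {
        type: "Point",
        // GeoJSON coordinate order is [longitude, latitude].
        coordinates: [row.main_entrance.lon, row.main_entrance.lat],
      },
      properties: { name: row.name, contact: row.contact?.name },
    })),
  };
}
```

Watch the [lon, lat] order: it is the single most common GeoJSON mistake when coming from lat/lon-ordered sources.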

One Baserow-specific note: the linked field values in a row come back as an array of objects [{id: X, value: "..."}] by default, so you already have the row ID without an extra step.
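Pulling the IDs out of that link-field array is a one-liner; the field name here is an assumption:

```javascript
// A link-row field as Baserow returns it: an array of { id, value } objects.
const row = { main_entrance: [{ id: 12, value: "North door" }] };

// Collect just the row IDs (handles an empty/missing link field too).
const entranceIds = (row.main_entrance ?? []).map((link) => link.id);
```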

Thank you.

Late last night I managed to achieve something a bit differently:

1. Baserow node “get many” for the buildings table. In parallel, a Baserow node “get many” for the entrances table.
  2. Then a merge node to do a left join.
  3. Then I’ll need to continue my work.
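For reference, what the Merge node’s left join does at step 2 can be sketched in plain JS. The key field name is a placeholder; in the Merge node you would configure the matching fields instead of writing this:

```javascript
// Left join: keep every building, attach the matching entrance (or null).
// `main_entrance_id` is a hypothetical key field for the example.
function leftJoin(buildings, entrances, key = "main_entrance_id") {
  const byId = new Map(entrances.map((e) => [e.id, e]));
  return buildings.map((b) => ({ ...b, entrance: byId.get(b[key]) ?? null }));
}
```

The Map lookup makes the join O(n + m) instead of scanning the entrance list per building, which is also roughly how the Merge node behaves.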

In your suggestion I could have a batch call (“get many”) for the first poll only, but then I’d have individual API calls for each entrance that needs to be fetched, which would take a long time. Possibly there would be a way to keep stuff stored somewhere to prepare batch calls, but the parallel approach that I’ve tried (and that works) feels a lot easier to “code” and easier to read.

Why do you suggest a code node to merge, when there is a merge node?

I don’t really understand your comment that “the Code node at step 5 runs with all three requests’ outputs in scope”. Do you mind expanding a bit on this, please? You mention $input.all() or $items() with parentheses, which suggests a function, yet in your step 5 you refer to list indices items[x], which I find confusing.

Thank you!

yeah, the parallel approach is definitely cleaner, way fewer API calls that way. The Merge node should handle it fine for what you're doing