Reducing the number of API calls by grouping incoming data to avoid workflow crashes

Describe the issue/error/question

I am running an API call to fetch all the products sold by an online e-commerce store (roughly 2,500 items). This call's output doesn't include all the product details I need (e.g. product category, tax category, and producer name); instead it has fields like product category id, tax category id, and producer id. I have to use these ids to get the respective field names from Airtable. Therefore, I am running a custom Airtable API call (not the Airtable List operation, as it only runs once, not once per item) to get the matching name from Airtable using the ids.
After I get all the info from Airtable, I need to create a CSV with all the product details.

But due to the sheer number of items in the workflow, it takes a long time to process them all (~30 minutes, which is okay), and most of the time the workflow crashes or the status becomes unknown (“The workflow execution is probably still running but it may have crashed and n8n cannot safely tell”).

I am not sure if this is due to n8n or the server we are running n8n on.
I am throttling the Airtable API requests to 10 requests per 3 seconds (the Airtable API has a limit of 5 requests per second).

Now, the thing is, we only have ~30 product categories and ~5-10 tax categories, so it doesn't really make sense to run an Airtable API call for each of the 2,500 items individually (see the sketch below).
Note: The category names may change over time and more categories can be added, so I am avoiding hard-coding a Switch/Function node for this.
Also, I tried using loops with the Airtable List operation, but it didn't work out.
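
To illustrate what I am after, here is a minimal sketch of a Function node that reduces the incoming items to their unique category ids, so a downstream HTTP Request node would only call Airtable once per category. The field name product_category_id matches my sample data below; the rest is just an assumption about how such a node could look.

```javascript
// Hypothetical Function node: keep one item per unique
// product_category_id, so the Airtable call that follows runs once per
// category instead of once per product.
const uniqueIds = [...new Set(items.map((item) => item.json.product_category_id))];

return uniqueIds.map((id) => ({ json: { product_category_id: id } }));
```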

What is the error message (if any)?

Unknown: The workflow execution is probably still running but it may have crashed and n8n cannot safely tell.

Please share the workflow

For the sample workflow, I am only sending 20 items with a single field called product_category_id. The workflow checks/matches the product category id of all 20 incoming items against the Airtable data using an API call. However, only 6 distinct categories appear in the sample data. So, I would like to run the Airtable API call only 6 times rather than once per item, and then add the names of those 6 product categories to all 20 items based on their category ids. That way, when I use a Set node or create a CSV from the items, all 20 items have both the product category id and the category name.
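
For reference, the merge-back step could be a Function node along these lines, assuming the original items come from a node named Original Items, the lookup results come from a node named Airtable Lookup, and each Airtable result carries id and name fields (all of these names are placeholders, not my actual node or field names):

```javascript
// Hypothetical Function node: build an id -> name lookup table from the
// 6 Airtable results, then copy the matching category name onto each of
// the 20 original items.
const nameById = {};
for (const item of $items("Airtable Lookup")) {
  nameById[item.json.id] = item.json.name;
}

return $items("Original Items").map((item) => ({
  json: {
    ...item.json,
    product_category_name: nameById[item.json.product_category_id],
  },
}));
```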

Additionally, in the actual scenario, I would have to do separate API calls for each id field (product category id, tax category id, and producer id) and then combine everything to get the full product details.

Share the output returned by the last node

Information on your n8n setup

  • **n8n version:** 0.199.0
  • **Database you’re using (default: SQLite):**
  • **Running n8n with the execution process [own(default), main]:**
  • **Running n8n via [Docker, npm, n8n.cloud, desktop app]:** Hosted on a server

Hi @Divyansh, I am sorry to hear you’re having trouble.

Grouping items is tricky in n8n, as most nodes, including the HTTP Request node, apply their logic to each individual item. So I’d suggest avoiding this unless there is no other way to make the workflow work.

The status you have seen usually suggests that a workflow execution requires more memory than is available. It’s worth checking the server logs for additional indicators of this if you want to be sure.

With this in mind, if your Item Lists node runs successfully, you might want to consider splitting your data into smaller batches afterwards using the Split In Batches node. Then hand each batch over to a sub-workflow via the Execute Workflow node, and make sure each sub-workflow only returns a very small dataset (something like { "success": true } or even an empty item instead of the full response data), as in the sketch below.
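
As a minimal sketch, the last node of each sub-workflow could be a Function node that simply discards the full data before handing control back to the parent workflow:

```javascript
// Final Function node in the sub-workflow: return one tiny item instead
// of the processed data, so the parent workflow does not accumulate
// every batch's full results in memory.
return [{ json: { success: true } }];
```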

This sub-workflow approach means that the memory required for each batch is freed again once each sub-workflow execution has finished.