Workflow stops at random moments with status "Unknown"

Describe the issue/error/question

My workflow reads an xlsx file with around 6,000 rows, does some filtering and mapping on the data, and then sends it to Zoho Creator with API calls. When I run this locally on my PC it works fine, but when using the cloud version the workflow stops at random moments (sometimes after 1,600 records, sometimes after 3,000).

Is this because I’m using too much memory or is there something else I can do?

What is the error message (if any)?

In the execution list I can see status “Unknown”

Please share the workflow

Share the output returned by the last node

Information on your n8n setup

  • n8n version:
  • Database you’re using (default: SQLite):
  • Running n8n with the execution process [own(default), main]:
  • Running n8n via [Docker, npm, n8n.cloud, desktop app]:

Hi @Marciano_Antonacci, welcome to the community! I am really sorry to hear you’re having trouble here.

Unfortunately, we can’t see your workflow if you only share the URL. Instead, you would need to share your workflow as described here (and then paste it on the forum) for others to see it.

That said, seeing you use n8n cloud, I had a quick look at the internal logs we have for your n8n cloud instance. It seems it has hit a JavaScript heap out of memory error, suggesting your workflow requires more memory to run than your n8n cloud instance has. As a result, n8n crashes during the workflow execution (and after restarting shows the “Unknown” status for the executions that were running during the crash).

Without knowing your workflow it’s hard to suggest specific changes. However, assuming that reading the file itself works (and the crash happens later in your workflow), we can probably get it to work eventually.

First, it’s important to understand what increases memory consumption. Factors include:

  • Amount of JSON data
  • Size of binary data
  • Number of nodes in a workflow
  • Type of nodes in a workflow (the Function node specifically drives up memory consumption significantly)
  • Whether the workflow is started by a trigger or manually (manual executions increase memory consumption since an additional copy of data is held available for the UI)

Options to avoid the aforementioned problem include:

  1. Split the data processed into smaller chunks (e.g. instead of fetching 10,000 rows with each execution, process only 200 rows with each execution)
  2. Split the workflow up into sub-workflows and ensure each sub-workflow only returns a limited amount of data to its parent workflow
  3. Avoid using the Function node
  4. Avoid executing the workflow manually

I suspect in your case it’s the amount of JSON data causing trouble here.

So you could try splitting up the data coming from your file using the Split in Batches node. Then, instead of processing all data in your main workflow, send each of your batches to a sub-workflow using the Execute Workflow node. The sub-workflow can then send your data to the Zoho API and only return a very small dataset to the main workflow at the end.

Using this approach, memory would be freed up after each batch.
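
For example, the very last node of such a sub-workflow could be a Function node along these lines (a minimal sketch, assuming you end the sub-workflow with a Function node; the field names are just placeholders):

```js
// Final node of the sub-workflow: drop the processed batch and
// return one tiny summary item, so the parent workflow never
// accumulates the full data of each batch.
return [
  {
    json: {
      batchDone: true,
      itemCount: items.length,
    },
  },
];
```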

I know this was a lot of information, but I hope it still makes sense! Let me know if you need help with any of the steps above.

Hi @MutedJam,

Thanks for your quick response!
I edited my post so the workflow is now shared correctly. As you can see, I’m first reading the file and filtering it in a Function node. Is there a better way to filter the JSON items?

After that I’m using the splitInBatches node to split the items into batches of 200 (Zoho accepts a maximum of 200 records per API call). In another Function node I am mapping the 200 items into an array to send to Zoho. I am doing this because the HTTP Request node sends one request per item, while I want to send one request with 200 items in the body.
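
Simplified, the mapping Function looks roughly like this (the exact wrapper around the records depends on the Zoho endpoint):

```js
// Simplified mapping Function: collect all items in the current batch
// into a single array so one HTTP Request can send them in one body.
const records = items.map(item => item.json);

return [
  {
    json: {
      // "data" is just the wrapper my Zoho endpoint expects;
      // adjust the key to whatever your API call needs
      data: records,
    },
  },
];
```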

Would it help to move the ‘Map Zoho items’ Function and the ‘Send to Creator’ HTTP Request to a sub-workflow? Or do you have other ideas to optimize this workflow?

Thanks in advance!


Hi @Marciano_Antonacci, thanks for confirming!

Yes, I think that’s worth a shot. Right now your main workflow keeps all items in memory for every single node, so you could try moving the ‘Map Zoho items’ and ‘Send to Creator’ steps into a sub-workflow.

This sub-workflow would process the items in the current batch, then return only a single, very small item to the parent and free up the memory it used.

Your parent workflow can call the sub-workflow using an Execute Workflow node.