(SplitInBatches) Workflow execution process did crash for an unknown reason!

Dear Community,

I created a workflow that splits data from Oracle into multiple batches and performs actions (for example, inserting data into Strapi) batch by batch. There are two custom nodes in the workflow:

  1. Oracle
    Runs a query in the Oracle DB
  2. Pagination
    Outputs the offset values based on the Total Count and the Count per page (a simplified sketch of this logic follows the list)
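
In essence, the Pagination node computes one offset per page from the total count. Here is a simplified sketch of that logic (not the actual node code; the function name and output shape are illustrative):

// Simplified sketch of the Pagination node's core logic:
// emit one item per page offset, derived from the total count.
function buildOffsets(totalCount: number, countPerPage: number): Array<{ offset: number }> {
  const offsets: Array<{ offset: number }> = [];
  for (let offset = 0; offset < totalCount; offset += countPerPage) {
    offsets.push({ offset });
  }
  return offsets;
}

// e.g. buildOffsets(50, 10) returns
// [{ offset: 0 }, { offset: 10 }, { offset: 20 }, { offset: 30 }, { offset: 40 }]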

The expected behaviour of the workflow is:

  1. Get the total count from Oracle
  2. Pass the total count to the Pagination node
  3. Using the Split In Batches node, split the items from the Pagination node one by one, and run a query in Oracle using the offset field (an example query follows)
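
For reference, each batch query uses standard Oracle offset pagination, roughly like this (table and column names are placeholders for the real ones):

// Roughly the per-batch query the Oracle node runs (Oracle 12c+ syntax);
// "my_table" and "id" are placeholders.
const sql = `
  SELECT *
  FROM my_table
  ORDER BY id
  OFFSET :offset ROWS FETCH NEXT :countPerPage ROWS ONLY
`;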

However, the workflow cannot be executed properly; it fails with the error “Workflow execution process did crash for an unknown reason”.

I found some issues related to this error message suggesting it could be a memory issue. However, since only 50 rows are fetched in this example, I am sure it cannot be a memory issue.

This workflow works fine in an environment with only the main process. The issue happens only in the environment with two workers and two webhook workers.

Device spec:
16-core CPU
32 GB RAM

Thanks a lot!

Hey @savina_kau!

This can happen for multiple reasons. Do you see any logs? If you don’t have logging configured, can you please set it up? You can follow the documentation here: Logging in n8n | Docs. The logs will help us get a better idea of what might be causing the error.
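
For example, setting something along these lines in your environment should be enough to capture debug logs (the file path is just an example):

N8N_LOG_LEVEL=debug
N8N_LOG_OUTPUT=console,file
N8N_LOG_FILE_LOCATION=/home/node/.n8n/logs/n8n.log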

Also tagging @krynble for his inputs.

Important to mention: if you run a workflow manually, it does not run on a worker; it runs only on the main service. Considering the error message you get, it seems like you run in “own” mode. So what happened is that the process executing that run crashed (that is why the reason is unknown). The only reason it normally crashes is a memory issue. Are you 100% sure it cannot be that, especially considering the information above?
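
For reference, this mode is controlled by the EXECUTIONS_PROCESS environment variable (in n8n versions of that era):

# run each execution in its own child process (the behaviour described above)
EXECUTIONS_PROCESS=own
# run executions inside the main process instead
EXECUTIONS_PROCESS=main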

I believe this is probably a memory error.

Although you are handling only 50 items at a time, the whole execution processes all of the items, and all of that data has to be stored in a single big object in memory, because n8n needs to save it to the database.

The problem is that the execution history (with all the items processed) grows and needs to be serialized and deserialized. This is probably causing your system to crash.
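
To illustrate the idea (a simplification, not n8n’s actual internals; queryOracle is a hypothetical stand-in for your Oracle node):

// Simplified illustration of why batching alone does not cap memory.
declare function queryOracle(offset: number, limit: number): Promise<object[]>; // hypothetical helper

async function runWorkflow(totalCount: number, batchSize: number): Promise<void> {
  const runData: object[][] = [];
  for (let offset = 0; offset < totalCount; offset += batchSize) {
    const batch = await queryOracle(offset, batchSize); // e.g. 50 rows per iteration
    runData.push(batch); // every batch's output is kept as execution history
  }
  // At the end the whole accumulated history is serialized in one go,
  // which is where large executions run out of memory:
  const serialized = JSON.stringify(runData);
  console.log(`execution payload: ${serialized.length} bytes`);
}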

I would recommend splitting this workflow into smaller parts so it can easily run without memory problems. This would require you to create a few different workflows.

Workflow A:

  • Query Oracle to get the item IDs only. Split in batches and use Execute Workflow to send each batch to a separate workflow

Workflow B:

  • Based on the IDs provided by workflow A, get the full data from Oracle and work on it. Then add one last Set node that clears all data and sets a simple return value, such as
{
    "success": true
}

The output from workflow B is passed back to workflow A as part of the result. This will not cause any problems, as you’ll have the following scenario:

  • Workflow A contains only a list of IDs and a simple success output from workflow B for each iteration
  • Workflow B contains multiple items, but that’s fine because it only ever handles one batch at a time

With this, I believe you should not run into any memory issues.


Thanks for your reply! For now, when I use npm run dev to start n8n, it works fine and can process the whole workflow smoothly. But it has an issue when I use Docker to run n8n. Do you know what the problem could be? Thanks!

Hey @savina_kau!

Are you getting the same error or is it a different error?
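
If it is the same crash, it is also worth checking whether the container is more memory-constrained than your local setup. As a quick experiment, you could give Node a larger heap when starting the container (the value here is just an example):

docker run -it --rm \
  -p 5678:5678 \
  -e NODE_OPTIONS="--max-old-space-size=4096" \
  n8nio/n8n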