Work with batches

Hello,

I have several workflows that give me a series of errors, and I don’t know what they are due to. Both have a similar configuration: they need to read a list of rows (in Sheets or Supabase), make a request to an API for each row, and store whatever the API returns in a column of the same row.

As there are many rows, I have added a batch node so that it works better, processing 100 rows at a time.

The problems I usually have are two:

  1. Sometimes it tells me that I don’t have enough memory, and I don’t understand why.
  2. Sometimes the batches don’t work for me and only do one round.

I don’t know if I have something configured wrong. Could you help me?

Thank you very much!

PS: I have n8n hosted on a DigitalOcean droplet with 1 GB of memory / 25 GB of disk.

Hi @fval90, I am sorry you’re having trouble.

From the looks of it, what causes the memory spike here is that, despite using batches, n8n still tries to keep all of your data in memory during the workflow execution and eventually runs out of memory. n8n itself won’t monitor its memory consumption, but you can probably find related error messages in your server logs.

To avoid having n8n keep a large amount of data in memory you want to use sub-workflows, and then call these sub-workflows through the Execute Workflow node. Provided your sub-workflows use a final Set node at the end returning only a very small (or even empty) item, this should reduce the overall memory consumption as memory required by each sub-workflow execution would become available again.
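As a rough illustration (not an exact setup, and the field name is made up), the last node of such a sub-workflow could be a Function/Code node that returns a single small summary item instead of all the processed rows:

```javascript
// Last node of the sub-workflow ("Run Once for All Items" mode).
// Instead of handing every processed row back to the parent, return one
// small summary item so the parent only has to keep this in memory.
return [
  {
    json: {
      processed: items.length, // number of items this sub-workflow handled
    },
  },
];
```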

You can also find more general advice on dealing with such situations on Memory-related errors | n8n Docs.


Hello, thank you very much for your help.
Is it necessary in this case to have a paid account?
I have tried it with two workflows, and I don’t know whether I’m not configuring it properly or it simply doesn’t work.

Workflow 1

Workflow 2

To amend my previous message: the second workflow is launched, but no data is reaching it.

Hi @fval90, working with the Execute Workflow node and the corresponding trigger can be a bit tricky at first. The trigger will essentially use whatever data your parent workflow passes on to the Execute Workflow node, meaning you wouldn’t typically have any data to work with when building your workflow.

You could however insert suitable test data manually using n8n’s data pinning functionality. Simply copy the JSON data arriving at the Execute Workflow node in your parent workflow, then insert this test data on your child workflow’s Execute Workflow trigger using the data editing functionality: Data editing | n8n Docs
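The data you copy and pin is just plain JSON. Purely as a hypothetical example (the field names are made up), the pinned items might look like this:

```json
[
  { "row_id": 1, "url": "https://example.com/item/1" },
  { "row_id": 2, "url": "https://example.com/item/2" }
]
```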


Thank you very much, MutedJam, for helping me.

But if I run the first workflow with data, the data will appear in the second workflow, won’t it?

Is there any video or test workflow where I can test the operation? Or could you copy my workflow with test data?

But if I run the first workflow with data, the data will appear in the second workflow, won’t it?

Yes, that’s what would happen in production executions :slight_smile:

Once the execution of your parent workflow reaches the Execute Workflow node, all data it has at this point will be made available to the sub-workflow’s Execute Workflow trigger.

Is there any video or test workflow where I can test the operation?

I’d suggest you create two simplified test workflows to play around with this and get familiar with these nodes. Keep it really simple at first. For example, prepare a parent like this:

And a child workflow like this:

You can see how the parent receives one data structure which it then passes on to the child:

Once the child workflow has finished it will send back whatever it has on its last node.
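As a rough sketch (using Function/Code nodes, with made-up field names rather than the exact workflows shown above), the pair could be as small as this:

```javascript
// Parent workflow: Manual Trigger -> Function -> Execute Workflow
// This Function node creates one small test item, which the Execute
// Workflow node then passes on to the child workflow.
return [{ json: { greeting: 'hello from the parent' } }];
```

```javascript
// Child workflow: Execute Workflow Trigger -> Function
// Echo the received items back with one extra field; whatever this
// last node returns is what the parent receives once the child finishes.
return items.map((item) => ({
  json: { ...item.json, answeredBy: 'child' },
}));
```

Running the parent once and then opening both executions in the execution list makes the round trip easy to follow.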

Hi, I have spent this weekend working on the workflow and let it run for a couple of nights to see if everything was going correctly.

On most runs, workflow 1 (the one with the batch) stops on the second round and says it has memory problems. I don’t understand why.

The information from workflow 2 is not passed back to workflow 1, is it? It only tells workflow 1 when workflow 2 has finished. Is this statement correct?

Workflow 1

Workflow 2

In another workflow where this also happened to me, the problem seems to have been solved.

Workflow 1

Workflow 2

The only difference I see is in workflow 2.
In the workflow where I have problems, I had to add another batch node because the result is more than 2,000 rows.
The only thing I can think of doing is to create a third workflow so that those batches run there instead of in that workflow.

Thank you so much!


Not quite. Data would actually be passed on between workflows:

  • The Execute Workflow node sends all incoming data to the Execute Workflow trigger
  • Once your sub-workflow (with the Execute Workflow trigger) finishes, the main workflow (with the Execute Workflow node) will receive the data returned by the last node of your sub-workflow

You can see this in your execution list (provided you store execution data in the first place).

We still have the same issue.

I have extended this to a third workflow so that the work is done more slowly, but the first workflow still ends up stopping at around the 13th batch.

Workflow 1

Workflow 2

Workflow 3

I don’t understand the issue. The first workflow, which does not have a lot of data, is the one that ends up stopping due to memory problems.

I can provide you with data for each workflow if necessary.

Thank you so much! :slight_smile:

Hi @fval90, your workflow 3 finishes with a “No Operation, do nothing” node, which passes all the data it receives back to the parent. So with each execution of this sub-workflow, the amount of memory required increases until the parent execution finishes.

You can reduce this memory consumption by adding a small Set node, for example one that writes a single field (something simple like {{ { "finished": true } }}) and makes use of the Execute Once option (so it returns only one such item instead of many):
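If you prefer a Function/Code node over the Set node, a snippet like this (just a sketch) behaves the same way, since it always returns exactly one tiny item no matter how many items come in:

```javascript
// Last node of the sub-workflow. Regardless of how many items arrive here,
// return a single small item so the parent execution only has to keep this
// one item in memory per sub-workflow run.
return [{ json: { finished: true } }];
```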

But ultimately, some workflow executions might require more memory than others for the reasons outlined in the documentation linked further up. In this case you’ll need to closely monitor the memory usage of your n8n instance and allocate more memory as needed I am afraid.


Hi!
I think it worked with that Set node :clap:
I’ll try it for a few days and write back here with the results.

Thanks a lot for the help!

