Is it possible with n8n to read and write large datasets in chunks?

The idea is:

When handling large datasets, data should be read and written in chunks.

My use case:

For example, I have a recordset of 1,000 records to be synced. An API is called at a 5-minute interval, and rows are read and written 10 at a time using a skip count. The first call handles rows 1 to 10, the next call rows 11 to 20, then 21 to 30, and so on.
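To make the pattern concrete, here is a minimal sketch (not n8n code, just an illustration of the offset/skip idea described above) that computes the 1-based row ranges each scheduled run would cover. The function name `chunkRanges` is hypothetical:

```javascript
// Hypothetical helper: given a total record count and a chunk size,
// return the 1-based [start, end] row ranges each run would handle.
function chunkRanges(totalRecords, chunkSize) {
  const ranges = [];
  for (let start = 1; start <= totalRecords; start += chunkSize) {
    // The last chunk may be smaller than chunkSize.
    const end = Math.min(start + chunkSize - 1, totalRecords);
    ranges.push([start, end]);
  }
  return ranges;
}

// For 1,000 records in chunks of 10: rows 1-10, then 11-20, then 21-30, ...
const ranges = chunkRanges(1000, 10);
console.log(ranges.slice(0, 3)); // [[1, 10], [11, 20], [21, 30]]
```

Each scheduled execution would then pick the next unprocessed range, call the API with the corresponding skip/limit values, and record its progress so the following run resumes where it left off.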

I think it would be beneficial to add this because large datasets would be processed smoothly.

This feature would prevent the app from hanging or crashing while handling large data.

Any resources to support this?

Are you willing to work on this?

Hi @Fuzonmedia_Developer

Welcome to the community!

This is what the Split In Batches node is for. :slight_smile:
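For readers unfamiliar with the node, the core idea it implements can be sketched in plain JavaScript (this is an illustration of the batching concept, not the node's actual implementation):

```javascript
// Sketch of the batching idea: take all incoming items and yield
// them in fixed-size batches so each loop iteration handles only
// a small slice of the data.
function* splitInBatches(items, batchSize) {
  for (let i = 0; i < items.length; i += batchSize) {
    yield items.slice(i, i + batchSize);
  }
}

// 25 items in batches of 10 -> [1..10], [11..20], [21..25]
const items = Array.from({ length: 25 }, (_, i) => i + 1);
const batches = [...splitInBatches(items, 10)];
console.log(batches.length); // 3
```

In a workflow, you wire the node's loop output back through your processing nodes and take the "done" output when all batches have been consumed.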

