The idea is:
When handling large data sets, the data is read and written in chunks rather than all at once.
My use case:
For example, I have a recordset of 1000 records to be synced. An API is called at an interval of 5 minutes, and the rows are read and written in chunks of 10, i.e. with a skip count that advances by 10 each call. The first call handles rows 1 to 10, the next handles rows 11 to 20, then 21 to 30, and so on. A rough sketch of this pattern follows below.
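To make the idea concrete, here is a minimal Python sketch of skip-based chunking under the assumptions from the example (1000 records, chunks of 10, a 5-minute interval). The names `fetch_rows` and `write_rows` are hypothetical placeholders for the real read and write steps, not an existing API.

```python
import time

BATCH_SIZE = 10          # rows handled per call (from the example above)
INTERVAL_SECONDS = 300   # 5-minute gap between calls (from the example above)
TOTAL_RECORDS = 1000     # size of the recordset in the example

def fetch_rows(offset: int, limit: int) -> list[dict]:
    """Hypothetical data source: return up to `limit` rows starting at `offset`."""
    return [{"id": i} for i in range(offset, min(offset + limit, TOTAL_RECORDS))]

def write_rows(rows: list[dict]) -> None:
    """Hypothetical sink: persist the current chunk."""
    print(f"synced rows {rows[0]['id']} to {rows[-1]['id']}")

def sync_in_chunks() -> None:
    offset = 0
    while True:
        rows = fetch_rows(offset, BATCH_SIZE)
        if not rows:
            break                     # every record has been synced
        write_rows(rows)              # write only the current chunk
        offset += BATCH_SIZE          # the next call resumes where this one stopped
        time.sleep(INTERVAL_SECONDS)  # wait until the next scheduled call

if __name__ == "__main__":
    sync_in_chunks()
```

Because only `BATCH_SIZE` rows are held in memory at a time, the full recordset never has to be loaded in one go, which is the point of the feature request.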
I think it would be beneficial to add this because: handling large data sets would run smoothly, and this feature would prevent the app from hanging or crashing while processing them.