hello
I am trying to read data from a Snowflake view into Supabase via n8n. There are 82,250 rows in the view, and I am reading them in a loop in batches of 4,000 rows. There are around 26 columns.
After the 3rd iteration the process suddenly stops.
How can I solve it? I need all of the columns.
Hi @amitkatzzadara Welcome!
I guess you are running into cloud memory and execution size limits in your n8n instance. I recommend decreasing the batch size to 500-1,000 rows per iteration. Also make sure not to aggregate everything at once, as that will again cause an out-of-memory error. Let me know what works.
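As a rough sketch of why smaller batches help (plain JavaScript, not n8n-specific; the row shape and the batch size of 500 are assumptions for illustration), processing rows batch-by-batch instead of all at once looks like:

```javascript
// Sketch: split rows into small batches so each batch can be written out
// and released, instead of holding all 82,250 rows in memory at once.
// A batchSize of 500 is an assumed starting point; tune it to your
// instance's memory limit.
function* batches(rows, batchSize = 500) {
  for (let i = 0; i < rows.length; i += batchSize) {
    yield rows.slice(i, i + batchSize);
  }
}

// Example: 82,250 dummy rows split into batches of 500
const rows = Array.from({ length: 82250 }, (_, i) => ({ id: i }));
let batchCount = 0;
for (const batch of batches(rows, 500)) {
  batchCount += 1; // in n8n, this is where the Supabase insert would run
}
console.log(batchCount); // 165 batches
```

In n8n itself this is what the Loop Over Items (Split In Batches) node does; the point is that each iteration should only ever touch one small slice.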
You are hitting the memory limit primarily because the Aggregate Results node is forcing n8n to hold all 82,250 processed rows in RAM until the entire loop finishes, rather than releasing them.
To fix this, you should delete the Aggregate Results node so that the memory used for each batch of 4,000 rows is freed immediately after they are sent to Supabase. If you are still running out of memory after removing that node, the most robust solution is to move the processing logic (the Supabase insert) into a separate Sub-workflow; this ensures that after every batch execution, the memory is completely wiped clean before the next batch begins.
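To illustrate the sub-workflow idea (a hedged sketch in plain JavaScript, with `fetchBatch` and `insertBatch` as hypothetical stand-ins for the Snowflake read and Supabase insert, not real n8n APIs): the parent only tracks offsets, and each batch's heavy row data lives and dies inside the per-batch call, with only a tiny summary returned.

```javascript
// Hypothetical stand-in for the Snowflake read: returns `limit` dummy rows.
function fetchBatch(offset, limit) {
  return Array.from({ length: limit }, (_, i) => ({ id: offset + i }));
}

// Hypothetical stand-in for the Supabase insert.
function insertBatch(rows) {
  // rows are written out here and then discarded
}

// Models one sub-workflow execution: the full batch exists only inside
// this function, and only a small summary object escapes it.
function processBatch(offset, limit) {
  const rows = fetchBatch(offset, limit);
  insertBatch(rows);
  return { offset, inserted: rows.length };
}

// The parent loop never holds row data, only offsets and summaries.
function syncAll(totalRows, batchSize) {
  const summaries = [];
  for (let offset = 0; offset < totalRows; offset += batchSize) {
    summaries.push(processBatch(offset, Math.min(batchSize, totalRows - offset)));
  }
  return summaries;
}

const summaries = syncAll(82250, 4000);
console.log(summaries.length);                              // 21 batch runs
console.log(summaries.reduce((n, s) => n + s.inserted, 0)); // 82250 rows total
```

In n8n, `processBatch` corresponds to an Execute Sub-workflow call: each sub-workflow execution gets its own memory, which is freed when it returns.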
I deleted the aggregation nodes and updated the Code node to do this:
const totalRows = $('bi_ops_equipment_total_rows').first().json.TOTAL_ROWS;
return [{
  json: {
    table_name: 'bi_physica_equipment_last_event',
    process_name: 'bi_physica_equipment_last_event_sync',
    execution_timestamp: new Date().toISOString(),
    rows_processed: totalRows, // total row count from the count node
    status: 'Ok'
  }
}];