Hi
I’m trying to send data from a Snowflake DB to Google Sheets. It’s working fine when the result of my SQL SELECT query is not too large. As soon as the result gets larger (>200,000 rows) I get the following error:
```
RangeError: Maximum call stack size exceeded
    at Object.execute (/usr/local/lib/node_modules/n8n/node_modules/n8n-nodes-base/dist/nodes/Snowflake/Snowflake.node.js:163:33)
    at runMicrotasks (<anonymous>)
    at processTicksAndRejections (node:internal/process/task_queues:96:5)
    at async /usr/local/lib/node_modules/n8n/node_modules/n8n-core/dist/src/WorkflowExecute.js:454:47
```
It is going to be tricky. Because n8n does everything in memory, it is going to be a case of increasing the memory available to the container / Node process, so it would likely need to be set in Docker unless CapRover has an option for it.
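If you do go the memory route, the usual lever is the Node.js heap size via `NODE_OPTIONS`. A minimal sketch, assuming you run the official `n8nio/n8n` Docker image; the 4096 MB figure is just an illustration you would need to tune:

```
# Raise the V8 heap limit for the Node.js process inside the container.
# 4096 (MB) is an example value, not a recommendation; tune it to your host.
docker run -it --rm \
  -p 5678:5678 \
  -e NODE_OPTIONS="--max-old-space-size=4096" \
  n8nio/n8n
```

With docker-compose you would put the same `NODE_OPTIONS` line under the service’s `environment:` key instead.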
The downside, though, is that you would probably need to keep increasing it to find a value that works for you, and eventually you may hit the limit of the memory you have available. I know you don’t want to do it, but it might be worth starting with splitting your workflow up.
So my suggestion here (seeing that <200K rows work) would be to implement some pagination logic similar to what @jon suggested, and from the sounds of it similar to what you have already done; there is a sketch of the idea below.
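To make that concrete, here is a rough sketch of the batching query. It assumes your table has a stable, unique key to order by; the table and column names are placeholders:

```sql
-- Pull one batch per workflow loop instead of the whole result set at once.
-- ORDER BY a stable, unique key so consecutive pages don't overlap or skip rows.
SELECT *
FROM my_table              -- placeholder: your source table
ORDER BY id                -- placeholder: a stable, unique key
LIMIT 10000                -- batch size; keep it well under the ~200K that fails
OFFSET 0;                  -- raise by the batch size on each iteration
```

In the Snowflake node you could drive the OFFSET with an expression such as `{{ $runIndex * 10000 }}` inside a loop and stop once a batch returns fewer than 10,000 rows. For very large tables, keyset pagination (`WHERE id > <last seen id>`) avoids the cost of large offsets, but LIMIT/OFFSET is the simplest to wire up.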
Perhaps you could share what exactly you’re struggling with here? We could then build a workflow template to make this process easier for other users.