"Maximum call stack size exceeded" when getting large result from Snowflake DB

Hi
I’m trying to send data from a Snowflake DB to Google Sheets. It’s working fine when the result of my SQL SELECT query is not too large. As soon as the results get larger (>200,000 rows), I get the following error:

RangeError: Maximum call stack size exceeded
    at Object.execute (/usr/local/lib/node_modules/n8n/node_modules/n8n-nodes-base/dist/nodes/Snowflake/Snowflake.node.js:163:33)
    at runMicrotasks (<anonymous>)
    at processTicksAndRejections (node:internal/process/task_queues:96:5)
    at async /usr/local/lib/node_modules/n8n/node_modules/n8n-core/dist/src/WorkflowExecute.js:454:47

I’m running n8n on a cloud server with a CapRover 1-click Docker image. Sounds like some kind of memory limitation.

Any idea how to fix this (besides splitting up the result into multiple parts)?

Hey @masterofweb,

Welcome to the community :rocket:

It could be a memory thing; the best approach there would be to split the workflow up into smaller batches.

If you check the docker log, does it show any other errors?

No, there is only the init log:

Initializing n8n process
n8n ready on 0.0.0.0, port 5678
Version: 0.177.0
Editor is now accessible via:
…

I don’t even know at what level I should start troubleshooting: OS, Docker, CapRover, n8n?

Hey @masterofweb,

It is going to be tricky. Because n8n does everything in memory, it is going to be a case of increasing the memory available to the container / node process, so it could need a change in Docker unless CapRover has an option for it.

The downside to this, though, is that you would probably need to keep increasing it to find the value that works for you, and eventually you may hit the limit of memory you have available. I know you don’t want to do it, but it might be worth starting with splitting your workflow up.
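If you do go down the memory route, here is a minimal sketch, assuming CapRover lets you set environment variables on the app (or that you can add them to a plain docker run). The 4096 is just an illustrative value in megabytes; it has to fit the RAM your server actually has:

# Hypothetical example: start the n8n container with a larger Node.js heap
# --max-old-space-size is in MB; pick a value that fits your server's RAM
docker run -d \
  -p 5678:5678 \
  -e NODE_OPTIONS="--max-old-space-size=4096" \
  n8nio/n8n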

@MutedJam do you have any thoughts?

Hi,
yes, I already built a workflow to import only 1k rows at a time. It was a lot more difficult (for an n8n noob like me) than I thought.

Still, it would be helpful for me and others to know how to fix this problem.

So my suggestion here (seeing that <200K rows are working) would be to implement some pagination logic similar to what @jon suggested and what you have already done, by the sounds of it.

Perhaps you could share what exactly you’re struggling with here? We could then build a workflow template to make this process easier for other users.
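For reference, a rough sketch of what that pagination could look like inside the Snowflake node, assuming it runs in a loop (e.g. driven by a Split In Batches node) and assuming $runIndex is available as an expression in your n8n version. MY_TABLE, ID and the 1,000-row batch size are placeholders, not taken from your workflow:

-- Hypothetical paginated query: fetch one 1,000-row slice per loop run
-- {{ $runIndex * 1000 }} is an n8n expression; the offset grows each time the node runs
SELECT *
FROM MY_TABLE
ORDER BY ID        -- a stable ORDER BY keeps the pages consistent
LIMIT 1000
OFFSET {{ $runIndex * 1000 }}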