When trying to retrieve a big Google Sheet, I get back the error “ERROR: Maximum call stack size exceeded”. When I perform the same action in the cloud environment, I don’t get this error.
I don’t see anything documented about this on the Memory Issue page (or maybe I’m missing it).
I have already tried setting the environment variable NODE_OPTIONS=--max-old-space-size=8192 inside the Docker container, but that doesn’t seem to help either.
What do I need to modify to make sure this memory limit is not reached?
I have nothing at the moment; I thought we had actually fixed a lot of those. As a temporary workaround, you could try setting something like the below to see if it helps:
NODE_OPTIONS=--stack-size=1800
How many records do you have in your Google Sheet? That might help us reproduce this one.
I think this might be an issue with how array de-structuring and function arguments work in JavaScript, and we might need to break this .push call here into chunks.
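To illustrate the failure mode (this is just a standalone sketch of the pattern, not the actual n8n code): spreading a very large array into a single push() call passes every row as a separate function argument, which overflows the call stack, while pushing in chunks keeps the argument count small.

```js
// Standalone sketch of the failure mode, not the real n8n code.
const rows = new Array(200_000).fill({ value: 'x' });

const out = [];
try {
  // Every element becomes a separate argument to push(), which overflows
  // the call stack for very large arrays.
  out.push(...rows);
} catch (err) {
  console.error(err.message); // "Maximum call stack size exceeded"
}

// Pushing in fixed-size chunks keeps the argument count small, so it works
// no matter how many rows the sheet returns.
const CHUNK_SIZE = 10_000;
for (let i = 0; i < rows.length; i += CHUNK_SIZE) {
  out.push(...rows.slice(i, i + CHUNK_SIZE));
}
console.log(out.length); // 200000
```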
Since I don’t have access to your spreadsheet to debug this, I’ve pushed a custom Docker image that I believe should fix this issue.
If you can pull the custom Docker image n8nio/n8n:fix-max-call-stack, test it, and let us know whether it fixes the issue for you, we can create a proper pull request and get this fixed before the next release.
When starting the Docker image, the following error happens:
node: --stack-size= is not allowed in NODE_OPTIONS
@netroy,
So it does seem to work. I also removed NODE_OPTIONS="--max-old-space-size=8192", which doesn’t seem to affect it. The Google Sheets node runs, but the workflow still crashes after a while with a memory error, even though there is still enough memory available. How does this happen?
{"__type":"$$EventMessageWorkflow","id":"a625cf92-2cd9-4b74-a83c-2842a3ee362e","ts":"2023-09-30T22:48:51.372+02:00","eventName":"n8n.workflow.failed","message":"n8n.workflow.failed","payload":{"executionId":"44","success":false,"workflowId":"pZLcJ1gvxGUOle0G","isManual":false,"workflowName":"ExecutiveGroup - Check bounces & remove subscribers","lastNodeExecuted":"Spreadsheet File1","errorNodeType":"n8n-nodes-base.spreadsheetFile","errorMessage":"Workflow did not finish, possible out-of-memory issue"}}
Here you have the workflow once more:
Unfortunately I cannot share the sheets or CSV files with you, since they contain personal data. Here is a mockup (which is very tiny compared to the 117,851 rows in the Google Sheet): Mockup Data - Subscribers - Google Sheets
The CSV file is similar, but it holds 7,312 rows (the rows that we want to delete from the original Google Sheet).
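In case it helps to picture what the workflow does with that data, the matching step boils down to something like this (the column names are made up, not the real ones):

```js
// Made-up sketch of the matching step: collect the identifiers from the CSV
// into a Set, then keep only the sheet rows that are NOT listed in it.
// "email" is a placeholder column name.
const csvRows = [{ email: 'a@example.com' }, { email: 'b@example.com' }];
const sheetRows = [
  { email: 'a@example.com', name: 'Bounced subscriber' },
  { email: 'c@example.com', name: 'Active subscriber' },
];

const toRemove = new Set(csvRows.map((row) => row.email));
const remaining = sheetRows.filter((row) => !toRemove.has(row.email));

console.log(remaining.length); // 1 - only the row not present in the CSV
```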
Hope this gives you enough data!
This sounds like a completely different issue from what this thread was started for.
Just to update you: we’ve just merged the fix for the “Maximum call stack size exceeded” error, and it should be included in the next release on Wednesday.
So I did a few tests to see how it would respond: 80k rows, 100k rows, and the original data of 118k rows. It does work, but it takes incredibly long…
When checking the stats of the Docker container, it also never uses more than about 1 GB.
Using more memory won’t help, as the network speed and response time from Google are not affected by it.
It could just be that it takes that long to get the responses back; 118k rows is a fair amount of data to collect. That said, 174 minutes is also a long time, so it could be that the requests were being limited in some way.
Without seeing where it is going slow, it is hard to say. I can only base ideas on the information you are providing, so the more you can give us, the better the ideas will be.
Can you share the full output from one of the longer runs?