Google Sheets: ERROR: Maximum call stack size exceeded

Google Sheets memory issue on local machine

When trying to retrieve a large Google Sheet I get the error “ERROR: Maximum call stack size exceeded”. When I perform the same action in the cloud environment, the error does not occur.

I know there are multiple topics about this, but I don’t see anything documented about it on the Memory Issue page.
I have already tried setting the environment variable NODE_OPTIONS=--max-old-space-size=8192 inside the Docker container, but that doesn’t seem to help either.
What do I need to modify to make sure this memory limit is not reached?
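
For reference, this is roughly how I am setting the variable when starting the container (a sketch; the port and volume mappings are my local defaults, adjust as needed):

docker run -it --rm \
 --name n8n \
 -p 5678:5678 \
 -e NODE_OPTIONS="--max-old-space-size=8192" \
 -v n8n_data:/home/node/.n8n \
 docker.n8n.io/n8nio/n8n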

Please share your workflow

Issue screenshots

[Screenshot: Error]
[Screenshot: Memory allocation]
[Screenshot: Environment vars]

Information on your n8n setup

  • n8n version: 1.8.2
  • Database (default: SQLite): default
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker Desktop
  • Operating system: macOS (MacBook Pro M1)

Hope you can provide me with some more information, thank you!

Hey @Jon, have you got any magical ideas I could try?
It would be highly appreciated!

Thank you

Hey @drivn,

Welcome to the community :tada:

I have nothing at the moment; I thought we had actually fixed a lot of those. As a temporary solution, you could try setting something like the below to see if it helps.

NODE_OPTIONS=--stack-size=1800

How many records do you have in your Google Sheet? Knowing that might help us reproduce this one.


I think this might be an issue with how array destructuring and function arguments work in JavaScript, and we might need to break this .push call here into chunks.
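
To illustrate the failure mode (a minimal sketch, not n8n’s actual code): spreading a very large array into a single push() call passes every element as a separate function argument, and past a certain count (which depends on the V8 stack size) that overflows the call stack. Pushing in fixed-size chunks keeps the argument count bounded:

const rows = new Array(200000).fill({ value: 'x' });
const collected = [];

// collected.push(...rows);
// With enough elements this throws:
// RangeError: Maximum call stack size exceeded

// Chunked variant: each call spreads at most CHUNK_SIZE arguments.
const CHUNK_SIZE = 10000;
for (let i = 0; i < rows.length; i += CHUNK_SIZE) {
  collected.push(...rows.slice(i, i + CHUNK_SIZE));
}
console.log(collected.length); // 200000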

Since I don’t have access to your spreadsheet to debug this, I’ve pushed a custom docker image that I believe should fix this issue.

If you can pull this custom Docker image, n8nio/n8n:fix-max-call-stack, test it for us, and let us know whether it fixes the issue for you, then we can create a proper pull request and get this fixed before the next release.
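
For anyone following along, testing it would look something like this (the port and volume mappings are the usual local defaults):

docker pull docker.n8n.io/n8nio/n8n:fix-max-call-stack
docker run -it --rm --name n8n -p 5678:5678 \
 -v n8n_data:/home/node/.n8n \
 docker.n8n.io/n8nio/n8n:fix-max-call-stack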


Great stuff!
That seems to have worked; let me do some more (intensive) testing and I will report back with the results.


@Jon

By adding this to the command line

docker run -it --rm \
 --name n8n \
 -p 5678:5678 \
 -e NODE_OPTIONS="--stack-size=1800" \
 -v n8n_data:/home/node/.n8n \
 docker.n8n.io/n8nio/n8n:fix-max-call-stack

when starting the Docker image, the following error appears:

node: --stack-size= is not allowed in NODE_OPTIONS
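
For context: --stack-size is a V8 flag that is not on the NODE_OPTIONS allowlist (only a handful of V8 flags, such as --max-old-space-size, are), so it has to be passed directly on the node command line instead. A sketch of how that could look, assuming the image’s entrypoint can be overridden; the n8n binary path below is an assumption:

docker run -it --rm \
 --name n8n \
 -p 5678:5678 \
 -v n8n_data:/home/node/.n8n \
 --entrypoint node \
 docker.n8n.io/n8nio/n8n:fix-max-call-stack \
 --stack-size=1800 /usr/local/bin/n8n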

@netroy,
So it does seem to work. I also removed NODE_OPTIONS="--max-old-space-size=8192", which doesn’t seem to affect it. The Google Sheets node runs, however the workflow still crashes after a while with a memory error while there is still enough memory available. How does this happen?

Thank you!

Hey @drivn,

When you get the error, what appears in the log? It looks like this is no longer the original issue, which is handy.

Can you also share the workflow, Google Sheet, and spreadsheet file you are working with so we can test it?

We did recently make some tweaks to improve the Spreadsheet File node, but I suspect we need to make a tweak to how we write data as well.
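
The idea would be something along these lines (a minimal sketch, not n8n’s actual code): rather than serializing all rows into one large string and writing it out in a single call, stream the CSV row by row so memory use stays flat:

const fs = require('fs');

// Hypothetical helper: writes an array of flat row objects as CSV through a
// write stream instead of building the whole file in memory first.
// Assumes rows is non-empty; proper CSV quoting/escaping is omitted.
function writeCsvStreaming(rows, path) {
  const out = fs.createWriteStream(path);
  out.write(Object.keys(rows[0]).join(',') + '\n'); // header from first row
  for (const row of rows) {
    out.write(Object.values(row).join(',') + '\n');
  }
  out.end();
}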

Hi @Jon and @netroy ,

In the n8nEventLog.log I can see the following:

{"__type":"$$EventMessageWorkflow","id":"a625cf92-2cd9-4b74-a83c-2842a3ee362e","ts":"2023-09-30T22:48:51.372+02:00","eventName":"n8n.workflow.failed","message":"n8n.workflow.failed","payload":{"executionId":"44","success":false,"workflowId":"pZLcJ1gvxGUOle0G","isManual":false,"workflowName":"ExecutiveGroup - Check bounces & remove subscribers","lastNodeExecuted":"Spreadsheet File1","errorNodeType":"n8n-nodes-base.spreadsheetFile","errorMessage":"Workflow did not finish, possible out-of-memory issue"}}

Here you have the workflow once more:

Unfortunately I cannot share the sheets or CSV files with you, since there is personal data in there. Here is a mockup (which is very tiny compared with the 117,851 rows in the Google Sheet): Mockup Data - Subscribers - Google Sheets

The CSV file is similar, but it holds 7,312 rows (which we want to delete from the original Google Sheet).
Hope this gives you enough data!

Any ideas, @Jon / @netroy?
Should I try another way?

Hey @drivn,

Does the error occur if you use the sample file you shared?

This sounds like a completely different issue from what this thread was started for.

Just to update you: we’ve just merged the fix for the Maximum call stack size exceeded error, and it should be included in the next release on Wednesday.

New version [email protected] has been released, which includes GitHub PR 7384.


Hey @Jon, no, it runs when I use a smaller file (the mockup sheet I shared).
So it seems there is still a memory error somewhere.

How many records does it take to hit the error? Ideally we need something that reproduces it and takes away the guesswork.

So I did a few tests to see how it would respond: 80k rows, 100k rows, and the original data of 118k rows. Apparently it does work, but it takes incredibly long…
When checking the stats of the Docker container, it also never uses more than about 1 GB.

@Jon Why is it taking this long? Why is it not using more memory to hopefully speed up the process?

It seems like the last one took so long that it might also be something else (my connection, maybe), but I’m unsure about that.

Hey @drivn,

Using more memory won’t help, as the network speed and response time from Google are not impacted by it.

It could just be that it takes that long to get the responses back; 118k rows is a fair amount of data to collect. That said, 174 minutes is also a long time, so the requests may have been limited in some way.

Hey @Jon, so the issue is not the Google Sheet data retrieval or the comparison of the data; that part is actually pretty fast (18s max).


All of the time sits within the last two modules, where it’s writing the file. Any idea why this is taking so long?

Hey @drivn,

Without seeing where it is going slow, it is hard to say. I can only base ideas on the information you are providing, so the more you can give us, the better the ideas will be :slight_smile:

Can you share the full output from one of the longer runs?

In the Spreadsheet node where you are writing to a file, are you writing the data as CSV or as XLS?

It’s a CSV, @netroy.

It’s still the same workflow as in the earlier posts :slight_smile: