Workflow runs successfully locally but fails in the cloud

Describe the problem/error/question

The workflow executes an HTTP request against an API that returns 120,000 rows of data and inserts them into Google BigQuery.
On my local server it runs successfully.
The local server consumes at most 1.6 GB of memory.

What is the error message (if any)?

(screenshot of the error)

Please share your workflow

Share the output returned by the last node

Information on your n8n setup

  • n8n version: 1.8.2
  • Database (default: SQLite): n8n Cloud
  • n8n EXECUTIONS_PROCESS setting (default: own, main): Cloud
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Cloud
  • Operating system: Cloud

Welcome to the community @Herman_Tan!

Yes, the memory usage would be the reason. Cloud instances do have memory limitations. You would have to reduce memory usage by processing less data at once, for example by splitting it into batches. There are many posts about that in the forum.


Hi Jan,
Thank you for replying. Your work on n8n.io is amazing. Thank you for all the work.

The API endpoint generates 120,000 rows in one call, and it fails on the HTTP Request node itself with an out-of-memory error in the cloud.


Can the HTTP Request node send out the rows in batches of 10,000 rows?

Regards,
Herman

Thanks a lot. That is great to hear!

Sorry, I should have been more specific. Sadly, that would not fix the issue, as all the data would still end up in the workflow. The only way around it would be splitting the work into sub-workflows, where each one processes only part of the data. So you would have a main workflow that, for example, just passes a single pagination number (0, 1, 2, …) into a sub-workflow. That sub-workflow would then query only the first batch, second batch, …, process that information, and then return (ideally an empty item).
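
Purely as an illustration (the numbers and setup here are examples, not from a specific template), the main workflow's Code node could emit the page numbers like this:

```typescript
// Illustrative Code node in the main workflow ("Run Once for All Items"):
// emit one item per page number, so each sub-workflow run only queries
// and processes its own batch of the data.
const TOTAL_ROWS = 120000; // from this thread
const BATCH_SIZE = 10000;  // example batch size
const pages = Math.ceil(TOTAL_ROWS / BATCH_SIZE);

return Array.from({ length: pages }, (_, page) => ({ json: { page } }));
```

Depending on your n8n version, the Execute Workflow node may need to be configured to run once per item so the sub-workflow is called separately for each page.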

Arghhh, the API does not paginate… 50 MB flying through the internet in one response.
So it looks like it is impossible to do?

Normally, almost nothing is impossible, just more complicated :wink:

In this case, you could, for example, write the response out to multiple files with 1,000 items each and then have a sub-workflow read those files. It would be even better if it were not a sub-workflow but another main workflow that runs totally independently (for example, HTTP Request → Webhook), so that the whole memory gets freed up.
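
As a rough, non-authoritative sketch of that second idea (the webhook URL is made up, and it assumes `this.helpers.httpRequest` is available inside the Code node):

```typescript
// Illustrative Code node: split the already-fetched rows into chunks
// of 1,000 and POST each chunk to the Webhook trigger of a separate
// main workflow, so this execution can finish and free its memory
// while the other workflow processes the chunks independently.
const CHUNK_SIZE = 1000; // "1k items each"
const WEBHOOK_URL = 'https://example.n8n.cloud/webhook/process-chunk'; // made-up URL

const rows = items.map((item) => item.json);

for (let i = 0; i < rows.length; i += CHUNK_SIZE) {
  // Assumption: this.helpers.httpRequest is available in the Code node.
  await this.helpers.httpRequest({
    method: 'POST',
    url: WEBHOOK_URL,
    body: { chunkIndex: i / CHUNK_SIZE, rows: rows.slice(i, i + CHUNK_SIZE) },
    json: true,
  });
}

// Return one small item so no large data stays behind in this workflow.
return [{ json: { chunksSent: Math.ceil(rows.length / CHUNK_SIZE) } }];
```

Because the receiving workflow runs as its own execution, each chunk gets processed with its own memory, and this workflow only keeps a tiny summary item.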

I have updated the HTTP Request options as follows:
(screenshot of the updated HTTP Request options)
The step no longer breaks.
The output is now:
(screenshot of the node output)
How do I proceed to create the next step to break up the JSON?
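
For reference, a possible (untested) sketch of such a step in a Code node, assuming the HTTP Request node now returns the response as a file in a binary property called `data` and that the `this.helpers.getBinaryDataBuffer` helper is available; sizes and names are illustrative:

```typescript
// Illustrative Code node: read the file the HTTP Request node downloaded,
// parse it, and re-emit the rows as smaller chunks so no later node has
// to handle all 120,000 rows in a single item.
const CHUNK_SIZE = 10000; // example size

// Assumption: the response file sits in the binary property "data".
const buffer = await this.helpers.getBinaryDataBuffer(0, 'data');
const rows = JSON.parse(buffer.toString('utf8')); // assumes the API returns a JSON array

const chunks = [];
for (let i = 0; i < rows.length; i += CHUNK_SIZE) {
  chunks.push({
    json: {
      chunkIndex: i / CHUNK_SIZE,
      rows: rows.slice(i, i + CHUNK_SIZE),
    },
  });
}

return chunks;
```

Note that parsing still loads the whole response into memory once, so on a small Cloud instance the chunks would ideally still be handed off to an independent workflow, as described above.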

Jan,
I am now on the Starter plan.

Will upgrading to the next tier provide more memory in the instance?

Jan,

I added a node to write the binary data out to a file.

However, the Cloud account does not allow writing to files.

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.