Google drive node fails to download images of size > 1.5 MB

Describe the issue/error/question

I made a workflow that downloads an image from Google Drive using the fileID and then uploads it to AWS S3. It works fine with smaller images (~1 MB), but if the image is larger than 1.5 MB, the workflow keeps running and eventually crashes. The execution log shows the status as UNKNOWN thereafter.

What is the error message (if any)?

Please share the workflow

(Select the nodes and use the keyboard shortcuts CMD+C/CTRL+C and CMD+V/CTRL+V to copy and paste the workflow respectively)

Share the output returned by the last node

Information on your n8n setup

  • n8n version:
  • Database you’re using (default: SQLite):
  • Running n8n with the execution process [own(default), main]:
  • Running n8n via [Docker, npm, desktop app]:

Hi @Anubhav_Dubey, welcome to the community and sorry for the trouble.

This sounds like your workflow execution consumes more memory than is available to your n8n instance. Are you self-hosting n8n, or are you using n8n cloud?

Hi @MutedJam

I am using n8n cloud. How can I check / increase the memory allowed to my n8n instance?

Hey @Anubhav_Dubey, many thanks for confirming! Unfortunately, I could not locate an instance under your forum email address.

On a very general level, the following factors increase the memory consumption:

  • Amount of JSON data
  • Size of binary data
  • Number of nodes in a workflow
  • Type of nodes in a workflow (the Function node specifically drives up memory consumption significantly)
  • Whether the workflow is started by a trigger or manually (manual executions increase memory consumption since an additional copy of data is held available for the UI)

At the moment, there are the following options to avoid this problem:

  1. Increase the amount of RAM available to an n8n instance (this applies to self-hosted instances only; on n8n cloud it would require upgrading to a larger plan)
  2. Split the data processed into smaller chunks (e.g. instead of fetching 10,000 rows with each execution, process only 1,000 rows with each execution)
  3. Split the workflow up into sub-workflows (e.g. instead of having your data pass 50 nodes in one workflow, have it pass 10 nodes in 5 workflows each)
  4. Avoid using the Function node
  5. Avoid executing the workflow manually
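For option 1 on a self-hosted Docker setup, one common way to raise the memory ceiling is increasing the Node.js heap limit via `NODE_OPTIONS`. A minimal sketch (the `4096` value is an example; adjust it to the RAM actually available on your host):

```shell
# Run n8n with a 4 GB Node.js heap instead of the default
# (example values only; the container still needs that much host RAM)
docker run -it --rm \
  -p 5678:5678 \
  -e NODE_OPTIONS="--max-old-space-size=4096" \
  n8nio/n8n
```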

However, particularly for problems around binary data, we could try the approach described recently in this thread and configure your cloud instance to store binary data on the filesystem rather than keeping it in memory.
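For self-hosted users reading along, the filesystem mode mentioned above can be enabled with an environment variable (on n8n cloud this change is made on the backend for you); a sketch:

```shell
# Keep binary data (e.g. downloaded images) on disk instead of in memory,
# so large files no longer inflate the execution's RAM usage
export N8N_DEFAULT_BINARY_DATA_MODE=filesystem
n8n start
```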

Could you reach out to support via the Help center? There should be a contact form at the bottom. This will help me identify the right cloud instance for the change.
