Workflow Memory Error Due to Image Size - Practically Understanding Cloud Data Management

I’ve built a few workflows, but I’m not a Developer.

My workflow is extracting images attached in emails and uploading them to Drive using the Google Drive node.

Ran into the error below for the first time.

My understanding is that the binary data from the images is too large?

I’m on the starter plan with cloud storage and can upgrade to Pro, but I want to ensure that will fix the issue first.

I don’t know how to check how much memory my current plan includes or has left, whether there are daily limits, or whether the workflow will succeed if I run it tomorrow, etc…

Is the answer just getting access to more memory?

Any plain language help is greatly appreciated.

Here you can see the resource limitation of the cloud plans:

Thanks @jabbson. I did see this and review it before posting.

Here’s why I still posted after reviewing.

  1. I don’t understand any of these measurements (what’s a millicore etc…)
  2. Even if I did, I still wouldn’t know whether I’m hitting this memory limit across all my workflows or just with this one, or how to diagnose that so I can be sure of the proper fix, etc…

I understand the theory of memory limitations, but I don’t see anything about how to practically review my usage, by workflow or by account, to understand where I stand in relation to these limitations.

I don’t want to upgrade to Pro just to learn I’m still hitting errors and limitations.

In your workflow, do you loop over attachments and save them to Drive or do you collect all of them at once?

It loops to process only one email at a time, but that email may have several images to upload to Drive.

The attempt that keeps getting the error has 5 images, each around 3 MB in size.
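For what it’s worth, that payload can be sanity-checked with some back-of-envelope math. This is an assumption-level sketch, not how n8n Cloud actually measures memory: binary data passed between nodes is often held base64-encoded in the item JSON, which inflates it by roughly 4/3, and each node that touches the data may hold its own copy, so peak usage can be a multiple of the raw file sizes.

```javascript
// Rough sizing sketch (assumptions: base64-encoded binaries, ~4/3 inflation,
// and one extra in-memory copy per node that handles the data).
const imageCount = 5;
const mbPerImage = 3;

const rawMb = imageCount * mbPerImage;     // 15 MB of raw image data
const encodedMb = rawMb * (4 / 3);         // ~20 MB once base64-encoded

// If, say, 3 nodes each hold a copy of the encoded payload at the same time:
const nodesHoldingCopies = 3;              // hypothetical, for illustration
const peakMb = encodedMb * nodesHoldingCopies;

console.log(rawMb, Math.round(encodedMb), Math.round(peakMb));
```

So even a "small" 15 MB batch of images can translate into a much larger memory footprint while the workflow runs, which is why per-item processing helps.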

Oddly, the Google Drive node executes fine; it’s the Agent node that is active when the “connection is lost” message and the error pop up.

Thanks again for the help!

You could also try iterating over the attachments individually; that may help, but I’m not sure it will be enough.
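The idea above can be sketched as plain JavaScript (not the actual n8n node API; the `email` shape and field names here are hypothetical): split one email into one item per attachment, then handle each item sequentially so only a single image buffer is "in flight" at a time instead of all of them at once.

```javascript
// Sketch: fan one email out into per-attachment items, then process them
// one at a time. In n8n terms this is roughly a Code node splitting items
// followed by a loop, but the structures here are illustrative only.
function* splitAttachments(email) {
  for (const attachment of email.attachments) {
    // Each yielded item carries just one attachment, not the whole set.
    yield { subject: email.subject, attachment };
  }
}

const email = {
  subject: "Receipts",
  attachments: [
    { name: "a.jpg", sizeBytes: 3 * 1024 * 1024 },
    { name: "b.jpg", sizeBytes: 3 * 1024 * 1024 },
  ],
};

const uploaded = [];
for (const item of splitAttachments(email)) {
  // A real workflow would upload item.attachment to Drive here,
  // one attachment per iteration.
  uploaded.push(item.attachment.name);
}
console.log(uploaded.join(","));
```

Whether this is enough depends on how much of the payload the surrounding nodes still keep in memory, which is exactly the visibility problem discussed above.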

I was just about to post on this same subject. There doesn’t seem to be any way to gauge or monitor resource usage on the cloud plans.

I’m interested in the technique that @Jim_Le presents here regarding ephemeral databases but I don’t want that to become the cause of resource issues.

Can anyone advise if/how it’s possible to check the instance memory status, ideally during runtime? It doesn’t look like the interface or the API supports it.

Feel free to +1 this FR:


Done! :innocent:

