Hello n8n team,
I’m running into an issue with one of my workflows and would appreciate some guidance.
Workflow Overview
I have a workflow that automates media clipping management. It monitors a Gmail inbox, extracts PDF clippings and JPEG images, parses metadata (publication, date, medium), and organizes everything in Google Drive.
It renames files dynamically, sorts them by calendar week, updates a Google Sheet across multiple tabs (ONLINE / PRINT / SOCIAL / OTHER), and uploads files into a folder structure (Customer → LV → CW##).
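For context, the calendar-week routing is conceptually just ISO-week math on the parsed publication date. A minimal sketch of the idea (the function, customer, and LV names here are placeholders for illustration; the real logic lives in n8n expressions):

```python
from datetime import date

def clipping_folder(customer: str, lv: str, pub_date: date) -> str:
    """Build the Drive folder path Customer -> LV -> CW## from a clipping's date."""
    cw = pub_date.isocalendar()[1]  # ISO calendar week number
    return f"{customer}/{lv}/CW{cw:02d}"

# A clipping dated 2024-03-14 falls in ISO calendar week 11
print(clipping_folder("ExampleCustomer", "ExampleLV", date(2024, 3, 14)))
# prints: ExampleCustomer/ExampleLV/CW11
```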
Everything works fine for PDFs and smaller image batches.
The Problem
The issue appears only with JPEG files because the client sends them inside ZIP files. These ZIPs range from ~19 MB to 70+ MB.
JPEGs cannot be pulled directly from the email; the ZIP must be extracted first.
This extraction step fails with memory limit errors.
We were originally on the Starter plan and assumed its limit was the cause, but after upgrading to Pro the problem persists. Smaller ZIP files work; larger ones always crash. I’ve attached the error log.
I asked the n8n AI for advice, and it suggested that the issue is likely tied to Cloud memory limits and recommended contacting support, since the ZIP files cannot be split or modified before processing.
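For what it’s worth, the archives themselves appear to be fine: extracting them outside n8n works, and a member-by-member extraction keeps peak memory near the size of a single image rather than the whole archive. A minimal sketch of that approach (function and file names are illustrative, not from our workflow):

```python
import io
import zipfile

def extract_jpegs(zip_bytes: bytes, out_dir: str) -> list[str]:
    """Extract only .jpg/.jpeg members from a ZIP, streaming each to disk one at a time."""
    written = []
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        for info in zf.infolist():
            if info.filename.lower().endswith((".jpg", ".jpeg")):
                zf.extract(info, out_dir)  # writes this member only; archive stays compressed
                written.append(info.filename)
    return written
```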
My Questions
- Is it possible to get more memory for this workflow or for the Compression/Extract node on the Pro plan?
- If not, what’s the recommended way to process ZIP files in the 20–70 MB range on n8n Cloud?
- Is there a known workaround or best practice for handling large ZIP extraction on Cloud?
If any additional logs or workflow details are needed, I’m happy to share them.
Thanks in advance for your help; I’d really like to get this working reliably.
Best regards,
Stefana