I get a lot of 503 errors lately

Lately, I’ve been working on a workflow that handles a decent amount of data. It worked fine a week ago, but now I can’t get it past the second node. It hangs until it says ‘no internet connection,’ and then I get a 503 error.

What can I do about this?

Just to clarify, the full workflow was working perfectly a week ago (including the parts that are currently disconnected).

I’m using the basic n8n plan.


Hi @Yogev_Meckler, welcome!
Please provide more information about your workspace and n8n version. As far as I can tell, a 503 is usually linked to the service going down under heavy load; consider upgrading your VPS or n8n Cloud plan to make sure this doesn’t happen.

Sounds like your server is running out of RAM.

How are you hosting it? Is it a VPS? Which provider and what specs? Does it happen with other flows too, or only this one? Can you share some logs?

I am using n8n Cloud, the basic version. I’ve added a photo of the crash that happens while working on the workflow.

The thing is, the workflow worked before. I scaled it down to make it easier to run so I wouldn’t get this message, but it still happens.

@Yogev_Meckler ‘Connection lost’ on its own is a minor issue, but it mostly occurs when usage is high. Consider upgrading your plan depending on how much data you’re processing.

Do you mind sharing your flow here? That would make it much easier for me to debug it.

You are interacting with APIs; perhaps there is an issue with one of them, or it returns more data than it did before.

I am currently using three different APIs, retrieving a large number of items from each (500, 3200, and 3200). I then remove unnecessary data—the raw data is about 10 MB, but I filter it heavily in the following node to avoid overloading the workflow. After that, two AI agents search through the data, which also involves filtering out irrelevant information.

This behavior seems strange because the workflow was fully functional just a few days ago, even with a larger dataset and on the same n8n plan.

I checked the API side, but it behaves the same. I even tried lowering the item count from 6,900 to only 400, and it passed to the sub-workflow, which worked, but then the workflow crashed again.

503 on n8n Cloud usually means the execution hit the memory or time limit for your plan tier — the platform is essentially saying “this job is too heavy for this resource bucket.”

Given your setup (500 + 3200 + 3200 = ~7000 items, two AI agents, 10 MB raw data), a few things to try before upgrading:

  • Batch earlier in the flow: Use the SplitInBatches node to process in chunks of 100-200 items instead of all at once. This dramatically reduces peak memory usage.
  • Filter at the API level: If your APIs support query filters, date ranges, or field selection, use them. Reducing payload size before it enters n8n is more efficient than filtering inside the workflow. If you do still need to trim data inside n8n, see the Code node sketch after this list.
  • Check AI agent call volume: If both agents make one LLM call per item, that’s potentially 7000 API calls per run — which can also trigger rate limits or timeouts upstream.
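
On the trimming-inside-n8n point, keeping each item as small as possible before it reaches the AI agents makes a big difference to peak memory. Here is a minimal sketch of a Code node (mode: Run Once for All Items); the field names are placeholders, so swap in whatever your agents actually use:

```js
// Minimal sketch for an n8n Code node ("Run Once for All Items").
// The field names (id, title, summary) are placeholders; keep only
// the fields your AI agents actually need so each item stays small.
return $input.all().map(item => ({
  json: {
    id: item.json.id,
    title: item.json.title,
    summary: item.json.summary,
  },
}));
```

Put a node like this as early in the flow as possible, ideally right after each API call, so the full payload never travels through the rest of the workflow.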

The fact that it worked with larger datasets recently but now fails at smaller counts is interesting. It might be that one API is returning larger payloads per item now (more fields, bigger content), even with similar item counts. Worth logging the raw byte size of each API response to confirm.
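
For the logging part, a pass-through Code node right after each API call works well. A rough sketch, using the JSON string length as a stand-in for bytes:

```js
// Pass-through n8n Code node that logs an approximate payload size.
// JSON string length is used as a rough byte estimate.
const items = $input.all();
const approxBytes = items.reduce(
  (sum, item) => sum + JSON.stringify(item.json).length,
  0
);
console.log(`items: ${items.length}, approx size: ${(approxBytes / (1024 * 1024)).toFixed(2)} MB`);
return items; // data passes through unchanged
```

Run the workflow manually a couple of times and compare the numbers; if the per-item size has grown, that’s your culprit.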

That’s a ton of data you are fetching for the plan you are on. Instead of fetching everything and then cleaning it up in n8n, use filters and only fetch the data you actually need. Happy to help you figure that out. Can you share the API and what data you need?

@fredfrom exactly — filtering at the source is the right call. @Yogev_Meckler if you can share which APIs you’re querying and what data you actually need, the filter params are usually straightforward to configure. Most APIs have date range filters, field selectors, or status filters that can cut 80–90% of the payload before it even hits n8n.
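
For illustration only (the parameter names here are made up; each API documents its own filter syntax), the kind of query this amounts to looks something like this:

```js
// Hypothetical example: parameter names (updated_since, fields, status,
// per_page) vary per API; check each API's docs for the real ones.
const params = new URLSearchParams({
  updated_since: '2024-06-01', // date-range filter: skip old records
  fields: 'id,title,summary',  // field selection: only what the agents need
  status: 'active',            // status filter
  per_page: '200',             // smaller pages pair well with batching
});
const url = `https://api.example.com/v1/records?${params.toString()}`;
// In n8n you would normally set these under Query Parameters in the
// HTTP Request node rather than building the URL by hand.
```

Once the filters are in place, the 500/3200/3200 fetches should shrink to only the records and fields the agents actually consume.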
