Intermittent 503 Error: Workflow Initialization and Execution Issues

Describe the problem/error/question

What is the error message (if any)?

Hello everyone, how are you? I’m using n8n Cloud on the “Starter” plan.

The issue is that when I try to run test executions of my workflows, I get the following errors:

  • “Init Problem - There was a problem loading init data: Request failed with status code 503”
  • “Problem running workflow: Request failed with status code 503”

This resolves itself after a few minutes (apparently the workspace restarts), but after a few more minutes it happens again.

Do you know what could be causing this? It’s holding up several projects and commitments with clients :frowning:

Greetings to all

Please share your workflow


Share the output returned by the last node

Information on your n8n setup

  • n8n version: 1.50.1
  • Database (default: SQLite): Default
  • n8n EXECUTIONS_PROCESS setting (default: own, main): Default
  • Running n8n via: Cloud
  • Operating system: macOS (Chromium browser)

It looks like your topic is missing some important information. Could you provide the following, if applicable?

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

It looks like you are hitting memory limits, possibly due to the size of your execution data. Please contact support with your Cloud account name.

@netroy Thank you very much for your response. I have already contacted the support team.

I’d still like to ask you a question, though:

Are these kinds of problems common on Cloud? At no point is it mentioned that you have to take the memory usage or size of your executions into account.

I think out-of-memory issues aren’t that common anymore, because most people on the Starter plan don’t have that much data in their executions.

However, one scenario where they do happen is a manual execution with a lot of data, because all of that data is currently buffered in memory to be sent back to the client over websockets. This can increase memory usage significantly.

You could try activating the workflow and executing it via the production webhook URL instead. If you still see the crash, we can try to debug this together.
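For reference, here is a minimal sketch of what such a production call could look like from a script (Node 18+, run as an ES module). The URL and payload are placeholders; use the Production URL shown on your workflow’s Webhook trigger node, which only responds while the workflow is active:

```typescript
// Minimal sketch: trigger the active workflow via its production webhook
// instead of a manual ("Test workflow") execution, so the result is not
// buffered in memory to be streamed back to the editor UI.
// NOTE: the URL and payload below are placeholders for illustration.
const webhookUrl = "https://<your-subdomain>.app.n8n.cloud/webhook/<your-path>";

const response = await fetch(webhookUrl, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ test: true }),
});

console.log(response.status, await response.text());
```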

I’m not saying that you should not use manual executions, just that the 320 MB of memory on the Starter plan can often be too low for some workflows.
In cases like these, people can either break up the workflow into sub-workflows, or upgrade the plan to a higher memory tier (see the sketch below for one way to shrink execution data before it piles up).
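On a related note, if a lot of your execution data comes from large fields that downstream nodes never use, a Code node early in the workflow can drop them before they get buffered. This is only an illustrative sketch, not an official recommendation: the field names are hypothetical, while `$input.all()` and the `{ json: ... }` return shape are the standard Code node conventions:

```typescript
// Illustrative sketch for an n8n Code node ("Run Once for All Items" mode):
// strip large, unneeded properties so less execution data is held in memory.
// "rawHtml" and "base64Payload" are hypothetical field names -- replace them
// with whatever heavy fields your own items actually carry.
const items = $input.all();

return items.map((item) => {
  // Keep everything except the heavy fields.
  const { rawHtml, base64Payload, ...slim } = item.json;
  return { json: slim };
});
```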

Hi @integrations_xtract

Just for reference, our Cloud Plan limits are as follows:

  • Starter: 320 MB RAM, 10 millicore CPU burstable
  • Pro-1: 640 MB RAM, 20 millicore CPU burstable
  • Pro-2: 1280 MB RAM, 80 millicore CPU burstable

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.