Self-hosted n8n on Hostinger VPS – frequent 502 / Bad Gateway errors, server hangs during workflow execution

Hi everyone,

I’m running a self-hosted n8n instance on a Hostinger VPS (Ubuntu 24.04) and I’m experiencing frequent issues where the server becomes temporarily unresponsive during workflow execution.

Sometimes the workflow stops and I receive either:

  • “502 Bad Gateway”

  • “Request failed with status code 502”

  • or the n8n editor becomes unresponsive for a short time

After a while it usually works again, but it happens quite regularly.

Server setup:

  • Hostinger VPS

  • Ubuntu 24.04

  • n8n self-hosted

  • Running via Docker

  • 4 vCPU

  • ~8 GB RAM

  • 200 GB disk

According to the server monitoring:

  • CPU usage usually stays around 3–15%

  • RAM usage around 1–4 GB

  • Disk and bandwidth are far from limits

So it doesn’t seem like the server is overloaded.

The issue tends to appear when workflows process larger numbers of items.

My questions:

  1. What is the most common cause of these 502 / Bad Gateway errors with self-hosted n8n?

  2. Could this be related to reverse proxy timeouts or Docker configuration?

  3. Could it be caused by large responses in the editor?

  4. Does the database type matter here (MySQL vs PostgreSQL vs SQLite)?

  5. Are there recommended production settings to make n8n more stable?

Would appreciate any guidance on what to check or optimize.

Thanks a lot!

502 errors with self-hosted n8n are almost always one of three things, and your setup narrows it down quickly.

1. Nginx/Caddy proxy timeout (most likely culprit)

When workflows process large numbers of items, they can run longer than your reverse proxy’s default timeout. The proxy gives up and returns 502 before n8n finishes. Default proxy_read_timeout in nginx is 60s — longer workflows hit this.

Fix:

proxy_read_timeout 300s;
proxy_connect_timeout 300s;
proxy_send_timeout 300s;
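
For context, a minimal sketch of the relevant nginx location block (assuming n8n on its default port 5678; adjust names to your setup). The websocket upgrade headers matter too, since without them the editor UI keeps dropping its connection:

location / {
    proxy_pass http://localhost:5678;        # n8n's default port
    proxy_http_version 1.1;                  # required for websocket upgrades
    proxy_set_header Upgrade $http_upgrade;  # pass the upgrade request through
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_read_timeout 300s;
    proxy_connect_timeout 300s;
    proxy_send_timeout 300s;
}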

If you’re using Caddy, add timeouts to your reverse_proxy block.
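
A rough Caddyfile equivalent, assuming n8n on port 5678 (Caddy handles websocket upgrades automatically; response_header_timeout is how long Caddy waits for n8n to start responding):

n8n.example.com {
    reverse_proxy localhost:5678 {
        transport http {
            # how long Caddy waits for the backend's first response
            response_header_timeout 300s
        }
    }
}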

2. Docker memory limits or OOM kills

Even if overall RAM is fine (1-4 GB used), Docker containers can have per-container memory limits set. When n8n’s Node.js process spikes during a large batch, it can hit the container limit, get OOM-killed, and return 502 until it restarts.

Check with:

docker stats n8n
docker inspect n8n | grep -i memory

If you’re not intentionally limiting memory, check anyway: a compose file copied from a tutorial may still set a limit.
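
You can also confirm an OOM kill directly (the container name n8n is an assumption, match yours); if the first command prints true, the kernel killed the container:

docker inspect n8n --format '{{.State.OOMKilled}}'
journalctl -k | grep -i 'out of memory'

If memory is the problem, raising Node's heap ceiling can help with large batches. A compose sketch, where the 4096 MB value is an assumption to tune against your 8 GB VPS:

services:
  n8n:
    environment:
      - NODE_OPTIONS=--max-old-space-size=4096  # allow Node a larger heap for big item sets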

3. n8n webhook/execution timeout settings

n8n has its own internal execution timeouts. For workflows processing large item sets, you may be hitting EXECUTIONS_TIMEOUT (default: no timeout), though more commonly it’s the webhook response that times out at the proxy.
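
If you do want a hard cap, these are the variables to set; a compose-style sketch where the values are examples, not recommendations:

environment:
  - EXECUTIONS_TIMEOUT=3600      # per-execution limit in seconds (-1, the default, means no timeout)
  - EXECUTIONS_TIMEOUT_MAX=7200  # ceiling for per-workflow overrides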

Check if the 502 happens at a consistent time (e.g., always around 30s or 60s) — that’s a strong signal it’s a proxy timeout.

What’s your reverse proxy setup — nginx, Caddy, or something else? And does the 502 happen at a roughly consistent time into execution, or is it random?

Hi, welcome! This is almost certainly your reverse proxy timing out: when workflows run longer than 60s, nginx just kills the connection and throws a 502. Set proxy_read_timeout 300s; and proxy_send_timeout 300s; in your nginx config, and make sure you have the websocket upgrade headers set, otherwise the editor will keep freezing on you. If you’re using SQLite, consider switching to PostgreSQL too, as SQLite doesn’t handle concurrent load well.

Bro, to save you the headache I’d advise you to just switch to Elestio. After the Meta and Hostinger issue I switched, and I don’t have any issues with my n8n.

Nice! I was wondering how it went for everyone in that thread, I haven’t checked it in a while! Idk why Hostinger is promoted by so many ppl, it isn’t that great.

Thanks for your replies!
I would need a provider that has German servers.

I just asked elest.io if they offer it.
Do you have any other recommendations for providers that don’t have the issues I mentioned?

Thank you guys

Yeah man, I’ve been wondering about the same thing. I followed a lot of YouTubers onto Hostinger, and I’m glad it showed its true colours while I only had 2 clients in my network. Imagine if I had scaled to 10+ and got hit with this issue :joy:

I moved to Elestio and don’t have headaches anymore. I use netcup as the provider, and yes, they have German servers.

Healthy CPU and RAM but still getting 502s on large item counts points to a few things.

First, check your Nginx proxy timeout. Default is 60s and long workflows hit it. Add proxy_read_timeout 300s; proxy_send_timeout 300s; to your config.

If that doesn’t fix it, SQLite is likely the culprit. Concurrent workflow executions cause write lock contention, which hangs n8n and triggers 502s upstream. Migrating to Postgres usually solves this completely.
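
If you do migrate, pointing n8n at Postgres is just environment variables. A sketch where the host, credentials, and database name are placeholders for your own:

environment:
  - DB_TYPE=postgresdb
  - DB_POSTGRESDB_HOST=postgres       # hostname of your Postgres container/server
  - DB_POSTGRESDB_PORT=5432
  - DB_POSTGRESDB_DATABASE=n8n
  - DB_POSTGRESDB_USER=n8n
  - DB_POSTGRESDB_PASSWORD=change-me  # placeholder, use a real secret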

The “hangs on larger item counts” is also a sign of event loop blocking in main mode. Setting EXECUTIONS_MODE=queue with Redis moves execution off the main process and keeps the UI responsive. Upstash Redis free tier is enough to start.
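
A compose sketch of queue mode, assuming the official image (whose entrypoint lets command: worker start an n8n worker) and a local Redis; for Upstash you’d point QUEUE_BULL_REDIS_HOST/PORT at their endpoint and add QUEUE_BULL_REDIS_PASSWORD:

services:
  n8n:
    image: docker.n8n.io/n8nio/n8n
    environment:
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis   # matches the service name below
      - QUEUE_BULL_REDIS_PORT=6379
  n8n-worker:
    image: docker.n8n.io/n8nio/n8n
    command: worker                   # pulls executions from the Redis queue
    environment:
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
      - QUEUE_BULL_REDIS_PORT=6379
  redis:
    image: redis:7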