Running more than one replica of the n8n pod on OpenShift

Describe the problem/error/question

We currently run the n8n pod with a single replica, connected to a Postgres DB pod that also has a single replica.

What is the error message (if any)?

When I tried to scale the pod up to 2 replicas, I got a multi-attach volume error.

We use 3 PVCs with the RWO (ReadWriteOnce) access mode.

So what I did was create 3 new PVCs with the RWX (ReadWriteMany) access mode and mount them instead, and now I can scale up to multiple instances while the DB pod still remains at a single replica.
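For reference, each of the new PVCs looks roughly like this (name, size and storage class are placeholders; the storage class has to actually support RWX, which usually means file-based storage):

```yaml
# Rough sketch of one of the new RWX claims; all names and values are placeholders.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: n8n-data-rwx
spec:
  accessModes:
    - ReadWriteMany          # RWX, so several pods on different nodes can mount it
  resources:
    requests:
      storage: 10Gi
  storageClassName: ocs-storagecluster-cephfs   # placeholder RWX-capable storage class
```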

Please guide me, or let me know whether this replication causes any impact to the pod or to the DB.

Will there be multiple concurrent write operations, or any database corruption, due to this configuration?

We got a suggestion from GPT to enable a connection to a standalone Redis instance through environment variables. Will this be sufficient for smooth operation?

Information on your n8n setup

  • n8n version: 1.109.2
  • Database (default: SQLite): postgres
  • n8n EXECUTIONS_PROCESS setting (default: own, main): own
  • Running n8n via (Docker, npm, n8n cloud, desktop app): n8n Cloud deployment running on OpenShift
  • Operating system: Linux (containerized environment on OpenShift)

Hi @krishna.athul

What you did fixed the storage error, but it doesn’t make this a safe horizontal scale of n8n.

In your current setup (EXECUTIONS_PROCESS=own), each replica acts as a full “main” instance. If you run two replicas like this, both will try to execute triggers, schedules and cleanup jobs at the same time. Postgres itself won’t get corrupted (it supports concurrent writes), but you can get duplicated executions, race conditions and unpredictable behaviour because this mode is only designed for a single main instance.

Just adding Redis via environment variables is not enough. To safely run more than one n8n pod you need to switch to queue mode (EXECUTIONS_MODE=queue) and use the supported architecture: one main instance plus multiple worker pods sharing Postgres and Redis (or multi-main in queue mode with the proper configuration and load balancer). If you want to stay in the current “own” mode, the recommended and safe approach is to keep only one n8n replica.
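To make that concrete, here is a minimal sketch of the queue-mode environment, assuming a Redis service called `redis` and a Postgres service called `postgres` in the same namespace (the names, secret and values are placeholders, not your actual config):

```yaml
# Sketch only, not a complete manifest. Shared env for the main pod and the worker pods.
env:
  - name: EXECUTIONS_MODE
    value: "queue"                  # switch from "own" execution to queue mode
  - name: QUEUE_BULL_REDIS_HOST
    value: "redis"                  # placeholder Redis service name
  - name: QUEUE_BULL_REDIS_PORT
    value: "6379"
  - name: DB_TYPE
    value: "postgresdb"
  - name: DB_POSTGRESDB_HOST
    value: "postgres"               # placeholder Postgres service name
  - name: N8N_ENCRYPTION_KEY        # must be identical on the main and every worker
    valueFrom:
      secretKeyRef:
        name: n8n-secrets           # placeholder secret
        key: encryptionKey
# The worker Deployment uses the same image and env, but starts the worker process:
# command: ["n8n", "worker"]
```

The main Deployment stays at 1 replica; the worker Deployment is the one you scale out.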

Hi Tamy,
Thank you for the quick response.
So, as per your suggestion, there would be 1 main pod and 2 or maybe 3 worker pods.

Actually, our main intention is to not let the n8n app go down whenever there is OpenShift maintenance. As part of this, some nodes restart, so the pods associated with them restart too and can get stuck in a creating state for maybe 10 or 20 minutes. Avoiding that downtime is our actual goal.

As per your suggestion, correct me if I'm wrong, but if this main pod goes down then the app will not work, so this also doesn't seem feasible. Or is there a way around this?

Please suggest.

Regards,

Athul

Hi @krishna.athul

From the docs and forum content, there are only two supported ways to avoid this single‑pod SPOF:

The first is switching to queue mode with one main instance and multiple workers. This change improves scaling and execution resilience, but not high availability of the core application. [Queue mode; Configuring workers]

The second is running multiple main instances in queue mode. This is what removes the single point of failure of the main process and provides true high availability. [Multi-main setup; K8s sticky sessions] [Queue env vars]

Attention:

  • You cannot safely achieve HA by running multiple pods in own mode.
  • To reduce the impact of node restarts: move to queue mode and add workers; the app stays usable when workers restart, but not when the single main restarts.
  • To keep the app up even when one main pod is restarted or stuck: you need the multi-main setup in queue mode.

The knowledge sources don’t say whether your specific “n8n Cloud deployment on OpenShift” supports enabling queue mode + workers or multi‑main, so I can’t confirm that part. I’d suggest:

  1. Ask n8n support if your plan/environment allows:
     • EXECUTIONS_MODE=queue + workers.
     • N8N_MULTI_MAIN_SETUP_ENABLED=true (multi-main).
  2. If only queue mode is available:
     • Use 1 main + N workers to at least make worker restarts harmless.
  3. If multi-main is available:
     • Run multiple mains in queue mode behind a sticky-session LB + workers to survive OpenShift node maintenance without full app downtime (see the sketch below).
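As a hedged sketch of the multi-main option (only if support confirms it is available for your plan; all names are placeholders and it builds on the queue-mode env from my previous reply):

```yaml
# Sketch only: the main Deployment scaled to several replicas with multi-main enabled.
spec:
  replicas: 2                        # two (or more) main pods behind a sticky-session route/LB
  template:
    spec:
      containers:
        - name: n8n
          env:
            - name: EXECUTIONS_MODE
              value: "queue"
            - name: N8N_MULTI_MAIN_SETUP_ENABLED
              value: "true"
            # ...plus the same Redis / Postgres / N8N_ENCRYPTION_KEY env as in queue mode
```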

Hope this helps!

Dear all

https://github.com/advisories/GHSA-5xrp-6693-jjx9

We are using n8n version 1.109.2. Could you check whether it is impacted?

Should we move to a newer version?

Please also suggest a stable version, if possible.

@krishna.athul

The current recommended stable version of n8n is 2.2.4.

Hello Tamy,

2.2.4 also satisfies the GitHub vulnerability version criteria.

We actually tested 1.123.17 and it seems to be working.

However, is it OK if we upgrade to version 2.4.6?

Could you please advise, if possible?

Thanks anyway for your fast replies.

Wow, @krishna.athul, thank you very much for pointing this out, I didn’t know about this.
I did some research here: use version 2.4.6 or 2.6.3, which is the latest stable version as of February 2026. Be careful when upgrading: make a backup, test it first in a staging environment, and check whether your OpenShift deployment is using stable images.

Hello Tamy,

We are planning to test the upgrade to the suggested version 2.2.4, maybe tomorrow, and will let you know the outcome.

For safety we will take a DB backup.

As we found earlier, there is no way to roll back to the previous version right after an upgrade other than restoring the DB backup.
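Roughly, the backup we have in mind is a one-off pg_dump Job against the DB pod, something like this (image, service name, credentials and PVC are placeholders):

```yaml
# Sketch only: one-off backup Job that dumps the n8n database to a backup PVC.
apiVersion: batch/v1
kind: Job
metadata:
  name: n8n-db-backup
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: pg-dump
          image: postgres:15                    # placeholder, should match the DB version
          command:
            - sh
            - -c
            - pg_dump -h postgres -U n8n -d n8n > /backup/n8n-$(date +%F).sql
          env:
            - name: PGPASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-secret         # placeholder secret
                  key: password
          volumeMounts:
            - name: backup
              mountPath: /backup
      volumes:
        - name: backup
          persistentVolumeClaim:
            claimName: n8n-backup-pvc           # placeholder PVC
```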

We would also like to ask how to view the metrics using Prometheus in Grafana, ideally with as much detail about the application as possible.

Thank you as always for the fast responses.

Regards,

Athul

Hi @krishna.athul

That sounds like a good plan.

For metrics, n8n can expose a Prometheus endpoint that you can scrape and visualize in Grafana. The basic setup is to enable the metrics endpoint in n8n, point Prometheus at it, and then create or import dashboards in Grafana. The official docs have a guide on this that’s worth checking out as a starting point.
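A minimal sketch of the two sides, assuming the default /metrics path on the standard n8n port 5678 (the target service DNS name is a placeholder; on OpenShift you would more likely use a ServiceMonitor, but the idea is the same):

```yaml
# n8n side: enable the endpoint via env (it is then served at /metrics on the n8n port):
#   - name: N8N_METRICS
#     value: "true"
#
# Prometheus side: a minimal scrape job; the target is a placeholder service DNS name.
scrape_configs:
  - job_name: n8n
    metrics_path: /metrics
    static_configs:
      - targets: ["n8n.my-namespace.svc.cluster.local:5678"]
```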

And yeah, definitely test everything in staging first, especially when you’re dealing with upgrades and scaling changes.

Let us know how the upgrade goes, it’d be helpful for others working through similar setups.

Regards 🙂

Dear Tamy,

We did the upgrade on the infra side and it seems OK. The workflow team will check the workflows.

Regarding the metrics:

I have added a metrics TCP port 1234 to the service YAML of the pod.

Along with that, the n8n metrics env variable is set to true, the n8n metrics port to 1234, and n8n metrics allow external to true, but I cannot access the metrics endpoint.
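For reference, the Service part now looks roughly like this (simplified; labels and names are placeholders):

```yaml
# Simplified extract of the Service after adding the metrics port; names are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: n8n
spec:
  selector:
    app: n8n
  ports:
    - name: http
      port: 5678
      targetPort: 5678
    - name: metrics
      port: 1234
      targetPort: 1234
```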