How to optimize n8n Self-Hosting for Scalability and Concurrent Request Handling?

I am self-hosting n8n, and my chatbot is experiencing increasing traffic. I want to optimize my setup to better handle concurrent requests and ensure scalability. While I haven’t faced significant issues yet, I’d like to configure the instance more robustly for future growth. What settings or adjustments can I make to improve parallel request handling and overall performance?

I used the basic n8n setup:

# Add -e N8N_DEFAULT_BINARY_DATA_MODE="filesystem" below when needed, e.g. when
# uploading YouTube videos (a comment line cannot sit inside the
# backslash-continued command without breaking it).
sudo docker run -d --restart unless-stopped -it \
  --name n8n \
  -p 5678:5678 \
  -e N8N_HOST="your-subdomain.your-domain.com" \
  -e WEBHOOK_TUNNEL_URL="https://your-subdomain.your-domain.com/" \
  -e WEBHOOK_URL="https://your-subdomain.your-domain.com/" \
  -e N8N_ENABLE_RAW_EXECUTION="true" \
  -e NODE_FUNCTION_ALLOW_BUILTIN="crypto" \
  -e NODE_FUNCTION_ALLOW_EXTERNAL="" \
  -e N8N_PUSH_BACKEND=websocket \
  -v /home/your-google-account/.n8n:/home/node/.n8n \
  n8nio/n8n
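
From the scaling docs, I gather the usual first steps are moving from SQLite to Postgres and switching to queue mode with Redis, but I have not tried it yet. Below is only a rough sketch of how I imagine the main instance would be started; the Postgres/Redis hostnames, ports, database name, and credentials are placeholders, not a tested configuration:

# Placeholder sketch (untested): main instance pointed at external Postgres and
# Redis, with queue mode enabled so executions can be offloaded to workers.
sudo docker run -d --restart unless-stopped -it \
  --name n8n-main \
  -p 5678:5678 \
  -e N8N_HOST="your-subdomain.your-domain.com" \
  -e WEBHOOK_URL="https://your-subdomain.your-domain.com/" \
  -e EXECUTIONS_MODE="queue" \
  -e QUEUE_BULL_REDIS_HOST="redis" \
  -e QUEUE_BULL_REDIS_PORT="6379" \
  -e DB_TYPE="postgresdb" \
  -e DB_POSTGRESDB_HOST="postgres" \
  -e DB_POSTGRESDB_DATABASE="n8n" \
  -e DB_POSTGRESDB_USER="n8n" \
  -e DB_POSTGRESDB_PASSWORD="change-me" \
  -v /home/your-google-account/.n8n:/home/node/.n8n \
  n8nio/n8n

As I understand it, the main instance would then mostly handle the UI, triggers, and webhook intake, while workers pick executions off the Redis queue. Is that roughly the right direction, or is it overkill as long as a single instance keeps up?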

Information on your n8n setup

  • n8n version: 1.88
  • Database (default: SQLite): SQLite
  • n8n EXECUTIONS_PROCESS setting (default: own, main): own
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker, gcp
  • Operating system: Windows 10
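
Since I'm still on SQLite with the default execution mode, I assume the simplest interim guard on the current single instance is to cap how many production executions run in parallel and to prune old execution data. A minimal sketch, assuming these concurrency and pruning variables apply to my version; the values are arbitrary placeholders I have not benchmarked, and the other -e flags from my command above would stay the same:

# Placeholder sketch: cap concurrent production executions and trim stored
# execution data so the database does not balloon under higher traffic.
sudo docker run -d --restart unless-stopped -it \
  --name n8n \
  -p 5678:5678 \
  -e N8N_CONCURRENCY_PRODUCTION_LIMIT="20" \
  -e EXECUTIONS_DATA_PRUNE="true" \
  -e EXECUTIONS_DATA_MAX_AGE="168" \
  -v /home/your-google-account/.n8n:/home/node/.n8n \
  n8nio/n8n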

Hi,

I think it’s already very well explained over here:

There is only so much you can do on a single instance, and even scaling horizontally usually has its limits.
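
Very roughly, scaling horizontally in n8n terms means queue mode: the main instance pushes executions into Redis and one or more worker containers pull them off. A minimal sketch of adding a worker, assuming Redis and Postgres are already running and the main instance was started with EXECUTIONS_MODE=queue and matching database/Redis settings (hostnames, credentials, and the concurrency value are placeholders):

# Placeholder sketch: one worker container; start more of these (with unique
# names) to scale out. --concurrency sets how many jobs this worker runs at once.
sudo docker run -d --restart unless-stopped -it \
  --name n8n-worker-1 \
  -e EXECUTIONS_MODE="queue" \
  -e QUEUE_BULL_REDIS_HOST="redis" \
  -e QUEUE_BULL_REDIS_PORT="6379" \
  -e DB_TYPE="postgresdb" \
  -e DB_POSTGRESDB_HOST="postgres" \
  -e DB_POSTGRESDB_DATABASE="n8n" \
  -e DB_POSTGRESDB_USER="n8n" \
  -e DB_POSTGRESDB_PASSWORD="change-me" \
  -e N8N_ENCRYPTION_KEY="same-key-as-main-instance" \
  n8nio/n8n worker --concurrency=10

The important bit is that every worker shares the same database, Redis, and encryption key as the main instance; otherwise stored credentials cannot be decrypted.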

Regards,
J.