I’m building a workflow for a Medical Clinic (20k msgs/mo) using n8n v2.2.3 Queue Mode + Chatwoot + NocoDB

Hi n8n Community.

I’m currently designing a robust infrastructure for a Medical Clinic client in LatAm. They handle approximately 4,500 WhatsApp consultations per week (mostly audio and text) and still rely heavily on phone calls, which causes a huge drop-off.

My goal is to centralize everything into a WhatsApp-first experience using a Self-Hosted “All-in-One” Stack. I wanted to share my architecture to give back to the community and ask for your feedback/roast on potential bottlenecks, especially regarding the new v2 architecture.

# The Use Case

Volume: ~20,000 incoming messages/month.
Peaks: Monday mornings are brutal (burst traffic).
Constraint: Client budget allows for high-end hardware but prefers open-source software over expensive per-seat SaaS CRMs.
Goal: 100% Uptime, Zero message loss during bursts.

# The Infrastructure

I’m deploying this on a 32GB RAM / 8 vCPU / NVMe VPS using Docker Compose.

# The Stack:

n8n (Queue Mode): Handles logic, AI routing, and API connections.
Chatwoot: For human agents to handle complex cases (handover).
NocoDB: Acts as a visual CRM/Database for the clinic directors to view patients and appointments (connected to the Postgres DB).
Shared Resources: A single Postgres 16 instance (with separate users/DBs for each service) and a single Redis instance, to maximize RAM efficiency.
Traefik: Reverse proxy handling SSL for all subdomains.

# The Redis Logic:

To handle the WhatsApp burst traffic without overwhelming the AI Agent or creating race conditions, I’m implementing a serialized lock system in n8n (there’s a rough code sketch right after this list):

1. Webhook: Receives the message → Respond Immediately.
2. Redis Push: Pushes the payload to a list inbox:{phone_number}.
3. Redis Lock: Attempts to set a key lock:{phone_number} with NX (Not Exists).
4. If Locked: The workflow stops (the active worker will pick up the new message from the list).
5. If Unlocked: The worker starts → loops through the Redis list → processes the AI Agent → checks the list again → unlocks when empty.
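
For reference, this is roughly what that loop looks like as code (a minimal sketch only, assuming an ioredis-style client and a REDIS_URL env var; the 120-second lock TTL and the processWithAgent helper are placeholders, not the production implementation):

```typescript
// Minimal sketch of the serialized lock: one "worker" per phone number drains inbox:{phone}.
// Assumes an ioredis client and a REDIS_URL env var; LOCK_TTL_SECONDS and processWithAgent
// are illustrative placeholders.
import Redis from "ioredis";

const redis = new Redis(process.env.REDIS_URL ?? "redis://redis:6379");

const LOCK_TTL_SECONDS = 120; // safety net so a crashed worker can't hold the lock forever

export async function handleIncomingMessage(phone: string, payload: unknown): Promise<void> {
  const inboxKey = `inbox:${phone}`;
  const lockKey = `lock:${phone}`;

  // 1) Always queue the message first (the webhook has already responded immediately).
  await redis.rpush(inboxKey, JSON.stringify(payload));

  // 2) Try to become the active worker for this phone number (SET ... NX).
  const acquired = await redis.set(lockKey, "1", "EX", LOCK_TTL_SECONDS, "NX");
  if (acquired !== "OK") return; // locked: the active worker will pick this message up

  // 3) Drain the list serially, refreshing the lock between messages.
  try {
    for (let raw = await redis.lpop(inboxKey); raw !== null; raw = await redis.lpop(inboxKey)) {
      await processWithAgent(phone, JSON.parse(raw)); // hypothetical call into the AI Agent flow
      await redis.expire(lockKey, LOCK_TTL_SECONDS);
    }
  } finally {
    // 4) List is empty: unlock so the next message starts a fresh worker.
    await redis.del(lockKey);
  }
}

// Placeholder for the AI Agent sub-workflow.
async function processWithAgent(phone: string, message: unknown): Promise<void> {
  /* ... */
}
```

The TTL is only a safety net so a crashed worker can’t hold the lock forever; the real unlock is the DEL once the inbox list is empty.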

# Hardware Specs

Provider: Hostinger KVM 8

RAM: 32 GB (We want to cache everything in Postgres/Redis).

Storage: 400 GB NVMe.

# My Questions for the Experts:

Logging: With ~20k executions per month, I’m worried about disk space. I plan to set EXECUTIONS_DATA_SAVE_ON_SUCCESS=none. Is there any other “hidden” log killer I should know about in Docker?

Shared Postgres: Is it safe to share the same Postgres container for n8n, Chatwoot, and NocoDB (using different DB names/users), or should Chatwoot strictly have its own isolated container?

n8n v2 Architecture: Since I’m deploying v2.2.3, do I need to add a dedicated n8n-task-runners container to my Docker stack for this volume? Or is the standard Worker container sufficient to handle Code Node isolation efficiently?

Thanks for reading! I’ll update this thread with my results once we go live next week.

n8n version: 2.2.3 (Latest Stable)
Database (PostgreSQL): Postgres 16
n8n EXECUTIONS_PROCESS setting: queue
Running n8n via: Docker Compose
Operating system: Ubuntu 22.04 LTS

---

Really cool setup! I’m actually implementing something similar for social media automation using the KVM 4 plan (16GB RAM, 4 vCPU). Same stack basically - n8n queue mode + Redis for handling burst traffic.

A few things I learned that might help:

Logging: Yeah, EXECUTIONS_DATA_SAVE_ON_SUCCESS=none is the move. Also set EXECUTIONS_DATA_PRUNE=true and EXECUTIONS_DATA_MAX_AGE=168 (hours, i.e. 7 days). Docker logs can get huge too, so add this under each service in your compose file:

```yaml
logging:
  driver: "json-file"
  options:
    max-size: "10m"   # rotate each container log file at 10 MB
    max-file: "3"     # keep at most 3 rotated files per container
```

Shared Postgres: I’m running n8n, NocoDB, and a few other services on the same Postgres instance with different databases. Works perfectly fine. Just make sure you set sensible connection limits for each service (per-service pool sizes, or per-role connection limits in Postgres) so they don’t starve each other. Chatwoot is fine sharing too.

Task runners: For 20k messages/month you don’t need a separate task-runner container. The standard worker handles it fine. I’d only add task-runners if you’re hitting 100k+ executions or running super heavy code nodes.

Your Redis lock strategy for WhatsApp is smart. I’m doing something similar for Instagram/Facebook APIs to avoid rate limits.
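
In case it helps, here is the rough shape of that throttle (again a minimal sketch assuming ioredis; the bucket name and the 200-calls-per-hour window are made-up examples, not Meta’s real limits):

```typescript
// Minimal fixed-window throttle sketch (ioredis assumed; limit/window values are only examples).
import Redis from "ioredis";

const redis = new Redis(process.env.REDIS_URL ?? "redis://redis:6379");

// Returns true if another call is allowed in the current window for this bucket.
export async function underRateLimit(bucket: string, limit = 200, windowSeconds = 3600): Promise<boolean> {
  const window = Math.floor(Date.now() / (windowSeconds * 1000));
  const key = `ratelimit:${bucket}:${window}`;

  const count = await redis.incr(key);
  if (count === 1) {
    await redis.expire(key, windowSeconds); // first hit in this window sets its expiry
  }
  return count <= limit;
}

// Usage in the flow: if underRateLimit("instagram") returns false, requeue the job and retry later.
```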

Let me know how the KVM 8 performs under load - I might upgrade if my client volume doubles!

---

Thanks for the reply! It’s great to read that your setup is running well.

Really appreciate the heads-up on the logging and task runners. I honestly hadn’t thought about the Docker log rotation, so that tip is a life-saver. I’ll definitely bump up the Postgres connections too, just to be safe with the 32GB RAM.

Thanks for clearing up my doubts!