Performance issue

Describe the problem/error/question

We have a production and a test environment: identical droplets on DigitalOcean.
Production runs n8n version 0.233.1; test runs 1.11.1.

We run the same script on both, synchronously. On production it completes in about 20 minutes, while on test it crashes after a few hours with an out-of-memory error.

Server monitoring showed that CPU load on the production server stays low, while on the test server the CPU is constantly at 100% while the script is executing.

What is the error message (if any)?

Please share your workflow

Share the output returned by the last node

Information on your n8n setup

  - **n8n version:** production 0.233.1, test 1.11.1
  - **Database (default: SQLite):** latest Postgres
  - **n8n EXECUTIONS_PROCESS setting (default: own, main):** main
  - **Running n8n via (Docker, npm, n8n cloud, desktop app):** Docker Compose
  - **Operating system:** Ubuntu LTS with Docker

Production YML:
```yaml
version: '2'
services:
  n8n:
    image: n8nio/n8n:latest
    networks:
      - n8n_network
    environment:
      EXECUTIONS_PROCESS: main
      N8N_BASIC_AUTH_PASSWORD: pass
      N8N_BASIC_AUTH_USER: n8n
      WEBHOOK_URL: https://our.url
      DB_TYPE: postgresdb
      DB_POSTGRESDB_DATABASE: postgres
      DB_POSTGRESDB_HOST: postgres
      DB_POSTGRESDB_PORT: '5432'
      DB_POSTGRESDB_USER: postgresuser
      DB_POSTGRESDB_PASSWORD: pass
      DB_POSTGRESDB_SCHEMA: public
      NODE_OPTIONS: --max-old-space-size=8000
      NODE_FUNCTION_ALLOW_EXTERNAL: moment
    volumes:
      - n8n:/home/node/.n8n
    ports:
      - 5678:5678/tcp

  postgres:
    image: postgres:14.1-alpine
    environment:
      POSTGRES_PASSWORD: pass
      POSTGRES_USER: postgresuser
    volumes:
      - db_data:/var/lib/postgresql/data
    ports:
      - 5432:5432/tcp

  nginx-proxy:
    image: nginx
    networks:
      - n8n_network
    ports:
      - 80:80
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro

networks:
  default:
    driver: bridge

volumes:
  n8n:
    driver: local
  db_data:
```

Dev YML:
```yaml
version: '3.8'

volumes:
  db_storage:
  redis_storage:
  n8n_data:
  nginx_data:

services:
  n8n-redis:
    image: redis
    restart: always
    container_name: n8n-redis
    volumes:
      - redis_storage:/data
    ports:
      - "6379:6379"
    networks:
      - n8n

  n8n-postgres:
    image: postgres
    restart: always
    container_name: n8n-postgres
    environment:
      - POSTGRES_PASSWORD=pass
    volumes:
      - db_storage:/var/lib/postgresql/data
    ports:
      - "5432:5432"
    networks:
      - n8n

  n8n:
    image: docker.n8n.io/n8nio/n8n
    restart: always
    container_name: n8n
    ports:
      - "5678:5678"
    volumes:
      - n8n_data:/home/node/.n8n
    environment:
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=n8n-postgres
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_DATABASE=postgres
      - DB_POSTGRESDB_USER=postgres
      - DB_POSTGRESDB_PASSWORD=pass
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=n8n-redis
      - QUEUE_HEALTH_CHECK_ACTIVE=true
      - WEBHOOK_URL=https://oursite
      - N8N_ENCRYPTION_KEY=somekey
      - GENERIC_TIMEZONE=Europe/Moscow
      - N8N_DISABLE_PRODUCTION_MAIN_PROCESS=true
      - EXECUTIONS_DATA_SAVE_ON_ERROR=all
      - EXECUTIONS_DATA_SAVE_ON_SUCCESS=none
      - EXECUTIONS_DATA_SAVE_ON_PROGRESS=true
      - EXECUTIONS_DATA_SAVE_MANUAL_EXECUTIONS=false
      - EXECUTIONS_DATA_PRUNE=true
      - EXECUTIONS_DATA_MAX_AGE=30
      - EXECUTIONS_DATA_PRUNE_MAX_COUNT=5000
      - NODE_OPTIONS=--max-old-space-size=8000
    networks:
      - n8n
      - nginx_proxy_manager_default

networks:
  n8n:
```

Hi @Ivan_Balashov, I am very sorry you’re having trouble.

> Server monitoring showed that CPU load on the production server stays low, while on the test server the CPU is constantly at 100% while the script is executing.

If I understand you correctly, the only difference between your prod and your test setup is the n8n version (0.233.1 vs 1.11.1), right? Can you narrow down which service is causing the CPU load? Is it n8n itself or possibly your database? Seeing you're using Docker, running `docker stats` should display this information, for example.
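A quick way to check, assuming the container names from the compose files above (`n8n`, `n8n-postgres`, `n8n-redis`):

```bash
# One-off snapshot of CPU and memory usage for all running containers
docker stats --no-stream

# Continuously watch only the n8n-related containers while the workflow runs
docker stats n8n n8n-postgres n8n-redis
```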


Yes, I used `docker stats`; it is the n8n container that loads the CPU on the server (not the database).

Are you running the execution on the test instance manually through the frontend?
I've noticed significantly higher memory utilisation on manual executions, and I'm wondering if that's what might be causing this issue.


No, it was not manual; it is started on a schedule every time, via a Schedule Trigger.

I removed the Redis database and performance improved a lot.
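Removing Redis effectively switches the test instance from queue mode back to n8n's default ("regular") execution mode, so workflows run in the main process again. A minimal sketch of that change to the dev compose file above, assuming no separate worker containers are used:

```yaml
  n8n:
    image: docker.n8n.io/n8nio/n8n
    environment:
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=n8n-postgres
      # Queue-mode settings removed: without EXECUTIONS_MODE=queue n8n falls
      # back to the default "regular" mode, so the n8n-redis service and the
      # QUEUE_BULL_REDIS_HOST / N8N_DISABLE_PRODUCTION_MAIN_PROCESS variables
      # can be dropped as well.
      - NODE_OPTIONS=--max-old-space-size=8000
```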