New selfhosted n8n instance is very slow with 1 basic workflow

Describe the problem/error/question

I installed n8n about a week ago to try it out, but I still haven't been able to use it properly because it is very slow. The login screen alone takes 20-30 seconds, and then opening a workflow takes about another 30 seconds. The strange thing is that I have dedicated more than enough resources to it (arguably overkill; see the setup information below).

What is the error message (if any)?

➜ n8n docker logs -f n8n-n8n-1
Initializing n8n process
n8n ready on ::, port 5678
n8n Task Broker ready on 127.0.0.1, port 5679
[license SDK] Skipping renewal on init: license cert is not due for renewal
Registered runner "JS Task Runner" (SZfoeuUW7xq_1srL4LMeq)
Version: 1.119.1

Editor is now accessible via:
https://n8n.hantar.be
(node:8) [DEP0060] DeprecationWarning: The util._extend API is deprecated. Please use Object.assign() instead.
(Use node --trace-deprecation ... to show where the warning was created)
Received request for unknown webhook: The requested webhook "aec368e8-c61f-43fd-81f1-df10f7cd3633" is not registered.
Received request for unknown webhook: The requested webhook "aec368e8-c61f-43fd-81f1-df10f7cd3633" is not registered.
Received request for unknown webhook: The requested webhook "aec368e8-c61f-43fd-81f1-df10f7cd3633" is not registered.
Blocked GET /robots.txt for "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36; compatible; OAI-SearchBot/1.3; robots.txt; +https://openai.com/searchbot"
[license SDK] attempting license renewal
[license SDK] license successfully renewed
Pruning old insights data
[license SDK] license successfully renewed
[license SDK] license successfully renewed
Pruning old insights data
Received request for unknown webhook: The requested webhook "sollicitatie-formulier" is not registered.
timeout of 3000ms exceeded
Error while fetching community nodes: timeout of 3000ms exceeded
Only running or waiting executions can be stopped and 86 is currently success

The only workflow I have contains just 2 nodes, so it is not heavy at all.

Information on your n8n setup

I am running Proxmox on a NUC8i7BEH2, with two LXC containers:

LXC 1: Traefik, so I can reach n8n through Cloudflare at n8n.hantar.be. Traefik runs directly on this LXC (not in Docker).

For completeness' sake I will add the Traefik config, but I don't think that is the issue because I can reach my domain just fine.

root@traefik:/etc/traefik/conf.d# cat n8n.yml 
http:
  routers:
    n8n:
      rule: "Host(`n8n.hantar.be`)"
      entryPoints:
        - websecure
      tls:
        certResolver: letsencrypt
      service: n8n_service

  services:
    n8n_service:
      loadBalancer:
        servers:
          - url: "http://192.168.1.88:5678" 

LXC 2: n8n running in Docker with a Postgres database. I gave this LXC 2 GB of RAM, 2 CPU cores, and 32 GB of SSD storage, plus a static IP address (192.168.1.88).

This is my docker-compose file:

➜  n8n cat docker-compose.yml
version: '3.7'

services:
  db:
    image: postgres:14
    restart: always
    environment:
      - POSTGRES_USER=n8n
      - POSTGRES_PASSWORD=MYPRIVETEPASS
      - POSTGRES_DB=n8n
    volumes:
      - postgres_data:/var/lib/postgresql/data

  n8n:
    image: n8nio/n8n
    restart: always
    ports:
      - "5678:5678"
    environment:
      - N8N_PROXY_HOPS=1
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=db
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_USER=n8n
      - DB_POSTGRESDB_PASSWORD=MYPRIVETEPASS
      - N8N_BASIC_AUTH_ACTIVE=true
      - N8N_BASIC_AUTH_USER=admin
      - N8N_BASIC_AUTH_PASSWORD=MYPRIVETEPASS
      - N8N_HOST=n8n.hantar.be
      - WEBHOOK_URL=https://n8n.hantar.be/
      - N8N_EXPRESS_TRUST_PROXY=true
      - N8N_EXPRESS_TRUSTED_PROXIES=*
      - N8N_SECURE_COOKIE=false
      - N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS=true
      - N8N_RUNNERS_ENABLED=true
      - N8N_BLOCK_ENV_ACCESS_IN_NODE=false
      - N8N_GIT_NODE_DISABLE_BARE_REPOS=true
      - N8N_SECURE_COOKIE=false
      - N8N_PROTOCOL=https
      - N8N_PORT=5678
    depends_on:
      - db
    volumes:
      - n8n_data:/home/node/.n8n

volumes:
  postgres_data:
  n8n_data:

I do see in the browser's network tab that a lot of requests are being cancelled or returning HTTP 520.

  • n8n version: 1.119.1
  • Database (default: SQLite): Postgres (see the docker-compose file above)
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Operating system: both LXCs run Debian

FYI: I just updated to the latest version, but that did not solve the problem. Could it be due to using Postgres instead of the default SQLite?

Hi @hantar, welcome!

Based on your setup:

My first impression is that using an HDD isn't best practice according to the n8n prerequisites. It should still work, though, so I'm not sure that's the cause.

Another thing that can help is enabling Prometheus metrics

N8N_METRICS=true

then access it at:

https://<Your_instance>/metrics

This will give you useful metrics that can help you figure out where the bottleneck is.
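Since you are running n8n via docker-compose, the variable goes into the `environment` block of the `n8n` service. A minimal sketch, based on the compose file you posted above (recreate the container afterwards with `docker compose up -d`):

```yaml
services:
  n8n:
    environment:
      # enable the Prometheus metrics endpoint at /metrics
      - N8N_METRICS=true
```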

Example:
# HELP n8n_process_cpu_user_seconds_total Total user CPU time spent in seconds.
# TYPE n8n_process_cpu_user_seconds_total counter
n8n_process_cpu_user_seconds_total 1.29099
# HELP n8n_process_cpu_system_seconds_total Total system CPU time spent in seconds.
# TYPE n8n_process_cpu_system_seconds_total counter
n8n_process_cpu_system_seconds_total 0.176783
# HELP n8n_process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE n8n_process_cpu_seconds_total counter
n8n_process_cpu_seconds_total 1.467773
# HELP n8n_process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE n8n_process_start_time_seconds gauge
n8n_process_start_time_seconds 1764248556
# HELP n8n_process_resident_memory_bytes Resident memory size in bytes.
# TYPE n8n_process_resident_memory_bytes gauge
n8n_process_resident_memory_bytes 549941248
# HELP n8n_process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE n8n_process_virtual_memory_bytes gauge
n8n_process_virtual_memory_bytes 22694395904
# HELP n8n_process_heap_bytes Process heap size in bytes.
# TYPE n8n_process_heap_bytes gauge
n8n_process_heap_bytes 592084992
# HELP n8n_process_open_fds Number of open file descriptors.
# TYPE n8n_process_open_fds gauge
n8n_process_open_fds 36
# HELP n8n_process_max_fds Maximum number of open file descriptors.
# TYPE n8n_process_max_fds gauge
n8n_process_max_fds 1048576
# HELP n8n_nodejs_eventloop_lag_seconds Lag of event loop in seconds.
# TYPE n8n_nodejs_eventloop_lag_seconds gauge
n8n_nodejs_eventloop_lag_seconds 0
# HELP n8n_nodejs_eventloop_lag_min_seconds The minimum recorded event loop delay.
# TYPE n8n_nodejs_eventloop_lag_min_seconds gauge
n8n_nodejs_eventloop_lag_min_seconds 0.009216
# HELP n8n_nodejs_eventloop_lag_max_seconds The maximum recorded event loop delay.
# TYPE n8n_nodejs_eventloop_lag_max_seconds gauge
n8n_nodejs_eventloop_lag_max_seconds 0.342884351
# HELP n8n_nodejs_eventloop_lag_mean_seconds The mean of the recorded event loop delays.
# TYPE n8n_nodejs_eventloop_lag_mean_seconds gauge
n8n_nodejs_eventloop_lag_mean_seconds 0.0110487490472103
# HELP n8n_nodejs_eventloop_lag_stddev_seconds The standard deviation of the recorded event loop delays.
# TYPE n8n_nodejs_eventloop_lag_stddev_seconds gauge
n8n_nodejs_eventloop_lag_stddev_seconds 0.013618980924907835
# HELP n8n_nodejs_eventloop_lag_p50_seconds The 50th percentile of the recorded event loop delays.
# TYPE n8n_nodejs_eventloop_lag_p50_seconds gauge
n8n_nodejs_eventloop_lag_p50_seconds 0.010084351
# HELP n8n_nodejs_eventloop_lag_p90_seconds The 90th percentile of the recorded event loop delays.
# TYPE n8n_nodejs_eventloop_lag_p90_seconds gauge
n8n_nodejs_eventloop_lag_p90_seconds 0.010108927
# HELP n8n_nodejs_eventloop_lag_p99_seconds The 99th percentile of the recorded event loop delays.
# TYPE n8n_nodejs_eventloop_lag_p99_seconds gauge
n8n_nodejs_eventloop_lag_p99_seconds 0.015097855
# HELP n8n_nodejs_active_resources Number of active resources that are currently keeping the event loop alive, grouped by async resource type.
# TYPE n8n_nodejs_active_resources gauge
n8n_nodejs_active_resources{type="PipeWrap"} 5
n8n_nodejs_active_resources{type="TCPServerWrap"} 2
n8n_nodejs_active_resources{type="ProcessWrap"} 1
n8n_nodejs_active_resources{type="TCPSocketWrap"} 2
n8n_nodejs_active_resources{type="Timeout"} 16
n8n_nodejs_active_resources{type="Immediate"} 1
# HELP n8n_nodejs_active_resources_total Total number of active resources.
# TYPE n8n_nodejs_active_resources_total gauge
n8n_nodejs_active_resources_total 27
# HELP n8n_nodejs_active_handles Number of active libuv handles grouped by handle type. Every handle type is C++ class name.
# TYPE n8n_nodejs_active_handles gauge
n8n_nodejs_active_handles{type="Socket"} 7
n8n_nodejs_active_handles{type="Server"} 2
n8n_nodejs_active_handles{type="ChildProcess"} 1
# HELP n8n_nodejs_active_handles_total Total number of active handles.
# TYPE n8n_nodejs_active_handles_total gauge
n8n_nodejs_active_handles_total 10
# HELP n8n_nodejs_active_requests Number of active libuv requests grouped by request type. Every request type is C++ class name.
# TYPE n8n_nodejs_active_requests gauge
# HELP n8n_nodejs_active_requests_total Total number of active requests.
# TYPE n8n_nodejs_active_requests_total gauge
n8n_nodejs_active_requests_total 0
# HELP n8n_nodejs_heap_size_total_bytes Process heap size from Node.js in bytes.
# TYPE n8n_nodejs_heap_size_total_bytes gauge
n8n_nodejs_heap_size_total_bytes 352194560
# HELP n8n_nodejs_heap_size_used_bytes Process heap size used from Node.js in bytes.
# TYPE n8n_nodejs_heap_size_used_bytes gauge
n8n_nodejs_heap_size_used_bytes 280274544
# HELP n8n_nodejs_external_memory_bytes Node.js external memory size in bytes.
# TYPE n8n_nodejs_external_memory_bytes gauge
n8n_nodejs_external_memory_bytes 47703917
# HELP n8n_nodejs_heap_space_size_total_bytes Process heap space size total from Node.js in bytes.
# TYPE n8n_nodejs_heap_space_size_total_bytes gauge
n8n_nodejs_heap_space_size_total_bytes{space="read_only"} 0
n8n_nodejs_heap_space_size_total_bytes{space="new"} 33554432
n8n_nodejs_heap_space_size_total_bytes{space="old"} 181940224
n8n_nodejs_heap_space_size_total_bytes{space="code"} 5505024
n8n_nodejs_heap_space_size_total_bytes{space="shared"} 0
n8n_nodejs_heap_space_size_total_bytes{space="trusted"} 9539584
n8n_nodejs_heap_space_size_total_bytes{space="new_large_object"} 0
n8n_nodejs_heap_space_size_total_bytes{space="large_object"} 121483264
n8n_nodejs_heap_space_size_total_bytes{space="code_large_object"} 172032
n8n_nodejs_heap_space_size_total_bytes{space="shared_large_object"} 0
n8n_nodejs_heap_space_size_total_bytes{space="trusted_large_object"} 0
# HELP n8n_nodejs_heap_space_size_used_bytes Process heap space size used from Node.js in bytes.
# TYPE n8n_nodejs_heap_space_size_used_bytes gauge
n8n_nodejs_heap_space_size_used_bytes{space="read_only"} 0
n8n_nodejs_heap_space_size_used_bytes{space="new"} 3552344
n8n_nodejs_heap_space_size_used_bytes{space="old"} 148173512
n8n_nodejs_heap_space_size_used_bytes{space="code"} 5046912
n8n_nodejs_heap_space_size_used_bytes{space="shared"} 0
n8n_nodejs_heap_space_size_used_bytes{space="trusted"} 8725520
n8n_nodejs_heap_space_size_used_bytes{space="new_large_object"} 0
n8n_nodejs_heap_space_size_used_bytes{space="large_object"} 114628920
n8n_nodejs_heap_space_size_used_bytes{space="code_large_object"} 155328
n8n_nodejs_heap_space_size_used_bytes{space="shared_large_object"} 0
n8n_nodejs_heap_space_size_used_bytes{space="trusted_large_object"} 0
# HELP n8n_nodejs_heap_space_size_available_bytes Process heap space size available from Node.js in bytes.
# TYPE n8n_nodejs_heap_space_size_available_bytes gauge
n8n_nodejs_heap_space_size_available_bytes{space="read_only"} 0
n8n_nodejs_heap_space_size_available_bytes{space="new"} 12942248
n8n_nodejs_heap_space_size_available_bytes{space="old"} 30419456
n8n_nodejs_heap_space_size_available_bytes{space="code"} 114048
n8n_nodejs_heap_space_size_available_bytes{space="shared"} 0
n8n_nodejs_heap_space_size_available_bytes{space="trusted"} 650128
n8n_nodejs_heap_space_size_available_bytes{space="new_large_object"} 16777216
n8n_nodejs_heap_space_size_available_bytes{space="large_object"} 0
n8n_nodejs_heap_space_size_available_bytes{space="code_large_object"} 0
n8n_nodejs_heap_space_size_available_bytes{space="shared_large_object"} 0
n8n_nodejs_heap_space_size_available_bytes{space="trusted_large_object"} 0
# HELP n8n_nodejs_version_info Node.js version info.
# TYPE n8n_nodejs_version_info gauge
n8n_nodejs_version_info{version="v22.21.0",major="22",minor="21",patch="0"} 1
# HELP n8n_nodejs_gc_duration_seconds Garbage collection duration by kind, one of major, minor, incremental or weakcb.
# TYPE n8n_nodejs_gc_duration_seconds histogram
n8n_nodejs_gc_duration_seconds_bucket{le="0.001",kind="minor"} 0
n8n_nodejs_gc_duration_seconds_bucket{le="0.01",kind="minor"} 6
n8n_nodejs_gc_duration_seconds_bucket{le="0.1",kind="minor"} 6
n8n_nodejs_gc_duration_seconds_bucket{le="1",kind="minor"} 6
n8n_nodejs_gc_duration_seconds_bucket{le="2",kind="minor"} 6
n8n_nodejs_gc_duration_seconds_bucket{le="5",kind="minor"} 6
n8n_nodejs_gc_duration_seconds_bucket{le="+Inf",kind="minor"} 6
n8n_nodejs_gc_duration_seconds_sum{kind="minor"} 0.024407263040542602
n8n_nodejs_gc_duration_seconds_count{kind="minor"} 6
# HELP n8n_version_info n8n version info.
# TYPE n8n_version_info gauge
n8n_version_info{version="v1.120.4",major="1",minor="120",patch="4"} 1
# HELP n8n_instance_role_leader Whether this main instance is the leader (1) or not (0).
# TYPE n8n_instance_role_leader gauge
n8n_instance_role_leader 1
# HELP n8n_active_workflow_count Total number of active workflows.
# TYPE n8n_active_workflow_count gauge
n8n_active_workflow_count 0


Hello @mohamed3nan

Thank you very much for your reply brother!

When I wrote HDD, I just meant storage in general; I actually have an SSD. I will update my original post!

I will try the metrics option and report back ASAP.


Hi @mohamed3nan

I did enable the metrics and took a look. However, the numbers aren't telling me much, to be honest :expressionless:

Summary

# HELP n8n_process_cpu_user_seconds_total Total user CPU time spent in seconds.
# TYPE n8n_process_cpu_user_seconds_total counter
n8n_process_cpu_user_seconds_total 208.963506
# HELP n8n_process_cpu_system_seconds_total Total system CPU time spent in seconds.
# TYPE n8n_process_cpu_system_seconds_total counter
n8n_process_cpu_system_seconds_total 92.846066
# HELP n8n_process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE n8n_process_cpu_seconds_total counter
n8n_process_cpu_seconds_total 301.809572
# HELP n8n_process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE n8n_process_start_time_seconds gauge
n8n_process_start_time_seconds 1764250505
# HELP n8n_process_resident_memory_bytes Resident memory size in bytes.
# TYPE n8n_process_resident_memory_bytes gauge
n8n_process_resident_memory_bytes 285200384
# HELP n8n_process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE n8n_process_virtual_memory_bytes gauge
n8n_process_virtual_memory_bytes 22955528192
# HELP n8n_process_heap_bytes Process heap size in bytes.
# TYPE n8n_process_heap_bytes gauge
n8n_process_heap_bytes 318042112
# HELP n8n_process_open_fds Number of open file descriptors.
# TYPE n8n_process_open_fds gauge
n8n_process_open_fds 37
# HELP n8n_process_max_fds Maximum number of open file descriptors.
# TYPE n8n_process_max_fds gauge
n8n_process_max_fds 524288
# HELP n8n_nodejs_eventloop_lag_seconds Lag of event loop in seconds.
# TYPE n8n_nodejs_eventloop_lag_seconds gauge
n8n_nodejs_eventloop_lag_seconds 0.001435576
# HELP n8n_nodejs_eventloop_lag_min_seconds The minimum recorded event loop delay.
# TYPE n8n_nodejs_eventloop_lag_min_seconds gauge
n8n_nodejs_eventloop_lag_min_seconds 0.007548928
# HELP n8n_nodejs_eventloop_lag_max_seconds The maximum recorded event loop delay.
# TYPE n8n_nodejs_eventloop_lag_max_seconds gauge
n8n_nodejs_eventloop_lag_max_seconds 0.018071551
# HELP n8n_nodejs_eventloop_lag_mean_seconds The mean of the recorded event loop delays.
# TYPE n8n_nodejs_eventloop_lag_mean_seconds gauge
n8n_nodejs_eventloop_lag_mean_seconds 0.010406950538157475
# HELP n8n_nodejs_eventloop_lag_stddev_seconds The standard deviation of the recorded event loop delays.
# TYPE n8n_nodejs_eventloop_lag_stddev_seconds gauge
n8n_nodejs_eventloop_lag_stddev_seconds 0.00035198723597677334
# HELP n8n_nodejs_eventloop_lag_p50_seconds The 50th percentile of the recorded event loop delays.
# TYPE n8n_nodejs_eventloop_lag_p50_seconds gauge
n8n_nodejs_eventloop_lag_p50_seconds 0.010256383
# HELP n8n_nodejs_eventloop_lag_p90_seconds The 90th percentile of the recorded event loop delays.
# TYPE n8n_nodejs_eventloop_lag_p90_seconds gauge
n8n_nodejs_eventloop_lag_p90_seconds 0.010813439
# HELP n8n_nodejs_eventloop_lag_p99_seconds The 99th percentile of the recorded event loop delays.
# TYPE n8n_nodejs_eventloop_lag_p99_seconds gauge
n8n_nodejs_eventloop_lag_p99_seconds 0.011452415
# HELP n8n_nodejs_active_resources Number of active resources that are currently keeping the event loop alive, grouped by async resource type.
# TYPE n8n_nodejs_active_resources gauge
n8n_nodejs_active_resources{type="PipeWrap"} 5
n8n_nodejs_active_resources{type="TCPServerWrap"} 2
n8n_nodejs_active_resources{type="ProcessWrap"} 1
n8n_nodejs_active_resources{type="TCPSocketWrap"} 10
n8n_nodejs_active_resources{type="Timeout"} 18
n8n_nodejs_active_resources{type="Immediate"} 1
# HELP n8n_nodejs_active_resources_total Total number of active resources.
# TYPE n8n_nodejs_active_resources_total gauge
n8n_nodejs_active_resources_total 37
# HELP n8n_nodejs_active_handles Number of active libuv handles grouped by handle type. Every handle type is C++ class name.
# TYPE n8n_nodejs_active_handles gauge
n8n_nodejs_active_handles{type="Socket"} 14
n8n_nodejs_active_handles{type="Server"} 2
n8n_nodejs_active_handles{type="ChildProcess"} 1
n8n_nodejs_active_handles{type="TLSSocket"} 1
# HELP n8n_nodejs_active_handles_total Total number of active handles.
# TYPE n8n_nodejs_active_handles_total gauge
n8n_nodejs_active_handles_total 18
# HELP n8n_nodejs_active_requests Number of active libuv requests grouped by request type. Every request type is C++ class name.
# TYPE n8n_nodejs_active_requests gauge
# HELP n8n_nodejs_active_requests_total Total number of active requests.
# TYPE n8n_nodejs_active_requests_total gauge
n8n_nodejs_active_requests_total 0
# HELP n8n_nodejs_heap_size_total_bytes Process heap size from Node.js in bytes.
# TYPE n8n_nodejs_heap_size_total_bytes gauge
n8n_nodejs_heap_size_total_bytes 152502272
# HELP n8n_nodejs_heap_size_used_bytes Process heap size used from Node.js in bytes.
# TYPE n8n_nodejs_heap_size_used_bytes gauge
n8n_nodejs_heap_size_used_bytes 142206136
# HELP n8n_nodejs_external_memory_bytes Node.js external memory size in bytes.
# TYPE n8n_nodejs_external_memory_bytes gauge
n8n_nodejs_external_memory_bytes 21908534
# HELP n8n_nodejs_heap_space_size_total_bytes Process heap space size total from Node.js in bytes.
# TYPE n8n_nodejs_heap_space_size_total_bytes gauge
n8n_nodejs_heap_space_size_total_bytes{space="read_only"} 0
n8n_nodejs_heap_space_size_total_bytes{space="new"} 1048576
n8n_nodejs_heap_space_size_total_bytes{space="old"} 131907584
n8n_nodejs_heap_space_size_total_bytes{space="code"} 7602176
n8n_nodejs_heap_space_size_total_bytes{space="shared"} 0
n8n_nodejs_heap_space_size_total_bytes{space="trusted"} 4763648
n8n_nodejs_heap_space_size_total_bytes{space="new_large_object"} 0
n8n_nodejs_heap_space_size_total_bytes{space="large_object"} 7024640
n8n_nodejs_heap_space_size_total_bytes{space="code_large_object"} 155648
n8n_nodejs_heap_space_size_total_bytes{space="shared_large_object"} 0
n8n_nodejs_heap_space_size_total_bytes{space="trusted_large_object"} 0
# HELP n8n_nodejs_heap_space_size_used_bytes Process heap space size used from Node.js in bytes.
# TYPE n8n_nodejs_heap_space_size_used_bytes gauge
n8n_nodejs_heap_space_size_used_bytes{space="read_only"} 0
n8n_nodejs_heap_space_size_used_bytes{space="new"} 628760
n8n_nodejs_heap_space_size_used_bytes{space="old"} 124270216
n8n_nodejs_heap_space_size_used_bytes{space="code"} 6563840
n8n_nodejs_heap_space_size_used_bytes{space="shared"} 0
n8n_nodejs_heap_space_size_used_bytes{space="trusted"} 3753896
n8n_nodejs_heap_space_size_used_bytes{space="new_large_object"} 0
n8n_nodejs_heap_space_size_used_bytes{space="large_object"} 6857776
n8n_nodejs_heap_space_size_used_bytes{space="code_large_object"} 138432
n8n_nodejs_heap_space_size_used_bytes{space="shared_large_object"} 0
n8n_nodejs_heap_space_size_used_bytes{space="trusted_large_object"} 0
# HELP n8n_nodejs_heap_space_size_available_bytes Process heap space size available from Node.js in bytes.
# TYPE n8n_nodejs_heap_space_size_available_bytes gauge
n8n_nodejs_heap_space_size_available_bytes{space="read_only"} 0
n8n_nodejs_heap_space_size_available_bytes{space="new"} 402152
n8n_nodejs_heap_space_size_available_bytes{space="old"} 5127816
n8n_nodejs_heap_space_size_available_bytes{space="code"} 562272
n8n_nodejs_heap_space_size_available_bytes{space="shared"} 0
n8n_nodejs_heap_space_size_available_bytes{space="trusted"} 924624
n8n_nodejs_heap_space_size_available_bytes{space="new_large_object"} 1048576
n8n_nodejs_heap_space_size_available_bytes{space="large_object"} 0
n8n_nodejs_heap_space_size_available_bytes{space="code_large_object"} 0
n8n_nodejs_heap_space_size_available_bytes{space="shared_large_object"} 0
n8n_nodejs_heap_space_size_available_bytes{space="trusted_large_object"} 0
# HELP n8n_nodejs_version_info Node.js version info.
# TYPE n8n_nodejs_version_info gauge
n8n_nodejs_version_info{version="v22.21.0",major="22",minor="21",patch="0"} 1
# HELP n8n_nodejs_gc_duration_seconds Garbage collection duration by kind, one of major, minor, incremental or weakcb.
# TYPE n8n_nodejs_gc_duration_seconds histogram
n8n_nodejs_gc_duration_seconds_bucket{le="0.001",kind="minor"} 2155
n8n_nodejs_gc_duration_seconds_bucket{le="0.01",kind="minor"} 2214
n8n_nodejs_gc_duration_seconds_bucket{le="0.1",kind="minor"} 2214
n8n_nodejs_gc_duration_seconds_bucket{le="1",kind="minor"} 2214
n8n_nodejs_gc_duration_seconds_bucket{le="2",kind="minor"} 2214
n8n_nodejs_gc_duration_seconds_bucket{le="5",kind="minor"} 2214
n8n_nodejs_gc_duration_seconds_bucket{le="+Inf",kind="minor"} 2214
n8n_nodejs_gc_duration_seconds_sum{kind="minor"} 1.114504253089427
n8n_nodejs_gc_duration_seconds_count{kind="minor"} 2214
n8n_nodejs_gc_duration_seconds_bucket{le="0.001",kind="incremental"} 11
n8n_nodejs_gc_duration_seconds_bucket{le="0.01",kind="incremental"} 15
n8n_nodejs_gc_duration_seconds_bucket{le="0.1",kind="incremental"} 15
n8n_nodejs_gc_duration_seconds_bucket{le="1",kind="incremental"} 15
n8n_nodejs_gc_duration_seconds_bucket{le="2",kind="incremental"} 15
n8n_nodejs_gc_duration_seconds_bucket{le="5",kind="incremental"} 15
n8n_nodejs_gc_duration_seconds_bucket{le="+Inf",kind="incremental"} 15
n8n_nodejs_gc_duration_seconds_sum{kind="incremental"} 0.011959950029850007
n8n_nodejs_gc_duration_seconds_count{kind="incremental"} 15
n8n_nodejs_gc_duration_seconds_bucket{le="0.001",kind="major"} 0
n8n_nodejs_gc_duration_seconds_bucket{le="0.01",kind="major"} 16
n8n_nodejs_gc_duration_seconds_bucket{le="0.1",kind="major"} 19
n8n_nodejs_gc_duration_seconds_bucket{le="1",kind="major"} 19
n8n_nodejs_gc_duration_seconds_bucket{le="2",kind="major"} 19
n8n_nodejs_gc_duration_seconds_bucket{le="5",kind="major"} 19
n8n_nodejs_gc_duration_seconds_bucket{le="+Inf",kind="major"} 19
n8n_nodejs_gc_duration_seconds_sum{kind="major"} 0.19290066093206407
n8n_nodejs_gc_duration_seconds_count{kind="major"} 19
# HELP n8n_version_info n8n version info.
# TYPE n8n_version_info gauge
n8n_version_info{version="v1.121.3",major="1",minor="121",patch="3"} 1
# HELP n8n_instance_role_leader Whether this main instance is the leader (1) or not (0).
# TYPE n8n_instance_role_leader gauge
n8n_instance_role_leader 1
# HELP n8n_active_workflow_count Total number of active workflows.
# TYPE n8n_active_workflow_count gauge
n8n_active_workflow_count 0

I let ChatGPT have a look at it, and it said everything looks fine.


Hi @hantar,

Have you tried using another browser?

For me, I visited the page and yes, the first load took a while, but after that it was a bit faster (4 seconds, 1 second, then 7 seconds... I tried multiple times).

Hi @appunitsai

I also tried in firefox and it is the same there.

Today I got the 520 as a notification in the n8n UI. I don't know whether I am getting it because n8n itself is having problems or because of my reverse proxy.

To find out, you can simply disable the reverse proxy and access n8n directly via its local address.
If it works normally, then we've identified the proxy as the problem.
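One quick way to compare the two paths, assuming `curl` is available on a machine in your LAN (the IP, port, and hostname below are the ones from this thread; `/healthz` is n8n's health-check endpoint):

```shell
# Time a request that bypasses Traefik/Cloudflare entirely (straight to the container)...
curl -o /dev/null -sS -w 'direct:  HTTP %{http_code} in %{time_total}s\n' http://192.168.1.88:5678/healthz

# ...and the same request through the proxy chain
curl -o /dev/null -sS -w 'proxied: HTTP %{http_code} in %{time_total}s\n' https://n8n.hantar.be/healthz
```

If the direct request is fast but the proxied one is slow (or returns 520), the bottleneck is in the Traefik/Cloudflare layer rather than in n8n itself.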

Another thing to check is the database. You can use SQLite for now just to test.

In short, set up a simple, clean installation without any proxies or extra configuration.

Once everything is working correctly, start adding the services back one by one; that way, if a problem appears, it will be easier to identify where it comes from.
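A stripped-down compose file for that test could look something like this (a minimal sketch assuming the default SQLite database and no reverse proxy; the volume name is illustrative):

```yaml
services:
  n8n:
    image: n8nio/n8n
    restart: always
    ports:
      - "5678:5678"   # reach the editor directly at http://<host-ip>:5678
    volumes:
      - n8n_data:/home/node/.n8n   # the default SQLite database lives in this volume

volumes:
  n8n_data:
```

If this minimal instance is fast, re-add Postgres, then Traefik, then Cloudflare, re-testing after each step.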


Can you increase the resources a bit (just for testing)? I honestly think it's the resources: you are running Docker plus several images with just 2 GB of RAM, which could cause this kind of slowness. If you don't want to spend money on server resources, run the whole setup locally with the same resources you mentioned; if you still experience slowness there, then that's the issue for sure.


Okay! I have been testing locally through my local IP address, and there it works just fine without any issues!
So the problem must be with Traefik or Cloudflare. However, double-checking the config, it all seems fine. Maybe I am overlooking something. I will report back when I have more details.
