Oclif timeout on startup

During the initial start I get a timeout:

0|n8n      | You have triggered an unhandledRejection, you may have forgotten to catch a Promise rejection:
0|n8n      | Error: timed out
0|n8n      |     at error (/home/hserge/.local/share/pnpm/global/5/.pnpm/@[email protected]/node_modules/@oclif/core/lib/errors/error.js:34:15)
0|n8n      |     at /home/hserge/.local/share/pnpm/global/5/.pnpm/@[email protected]/node_modules/@oclif/core/lib/flush.js:13:73
0|n8n      |     at flush (/home/hserge/.local/share/pnpm/global/5/.pnpm/@[email protected]/node_modules/@oclif/core/lib/flush.js:25:5)
0|n8n      | You have triggered an unhandledRejection, you may have forgotten to catch a Promise rejection:
0|n8n      | Error: timed out
0|n8n      |     at error (/home/hserge/.local/share/pnpm/global/5/.pnpm/@[email protected]/node_modules/@oclif/core/lib/errors/error.js:34:15)
0|n8n      |     at /home/hserge/.local/share/pnpm/global/5/.pnpm/@[email protected]/node_modules/@oclif/core/lib/flush.js:13:73
0|n8n      |     at flush (/home/hserge/.local/share/pnpm/global/5/.pnpm/@[email protected]/node_modules/@oclif/core/lib/flush.js:25:5)

As I understand it, this happens only on startup and is not workflow-related.
I tried to research it but couldn’t find any existing reports of this issue.

Information on your n8n setup

  • n8n version: 1.93
  • Database (default: SQLite): 3.45.1
  • n8n EXECUTIONS_PROCESS setting (default: own, main): I removed it from config
  • Running n8n via (Docker, npm, n8n cloud, desktop app): direct installation
  • Operating system: Ubuntu 24

If you’re testing at the moment, I’d recommend using Docker; this looks like an environment issue that is normally avoided by using Docker. I see no reference to n8n in the error either. What Node version are you using?

I use node v22.15.0


Node 20.19.0 is the default in the Docker image for n8n (as of n8n version 1.91.3). I would recommend using the image they build (Docker etc.), or try downgrading Node; there could be changes that aren’t compatible.
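A hedged sketch of checking whether you’re on the Node line the official image ships, assuming `nvm` is installed (the helper name `node_major` is my addition, not from the thread):

```shell
# Helper: extract the major version from a "vX.Y.Z" string, so the running
# Node can be compared against the 20.x line the official image ships.
node_major() {
  printf '%s\n' "$1" | sed 's/^v\([0-9]*\)\..*/\1/'
}

current="$(node --version 2>/dev/null || echo v0.0.0)"
if [ "$(node_major "$current")" != "20" ]; then
  echo "Running Node $current; the n8n 1.91.3 image uses 20.19.0"
  # Downgrade with nvm, then reinstall n8n under the pinned version:
  # nvm install 20.19.0 && nvm use 20.19.0
  # npm install -g n8n
fi
```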

Hope this helps

Thanks for your prompt response.

I guess using the Docker image is the most acceptable solution. I see several other issues as well; for example, Settings > Timezone gives a 404 and the timezone list is empty, and people suggest using Docker, where it works.

I’ll give Docker a try, which will also make it easier to move to Kubernetes for automatic scaling, queuing, etc.
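For reference, a minimal Deployment sketch for that (the name and label are placeholders I made up; 5678 is n8n’s default port, and the image is the official one from the compose file below):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: n8n
spec:
  replicas: 1
  selector:
    matchLabels:
      app: n8n
  template:
    metadata:
      labels:
        app: n8n
    spec:
      containers:
        - name: n8n
          image: docker.n8n.io/n8nio/n8n
          ports:
            - containerPort: 5678
```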

Yes, running k8s locally is fine and can be good, but I still need to learn more about k8s myself and might set it up soon. For now I just use Docker; this is my compose file:

version: '3.8'

volumes:
  db_storage:
  n8n_storage:
  redis_storage:
  pgadmin_data:
  prometheus_data:
  grafana_data:

x-shared: &shared
  restart: always
  image: docker.n8n.io/n8nio/n8n
  environment:
    - DB_TYPE=postgresdb
    - DB_POSTGRESDB_HOST=postgres
    - DB_POSTGRESDB_PORT=5432
    - DB_POSTGRESDB_DATABASE=${POSTGRES_DB}
    - DB_POSTGRESDB_USER=${POSTGRES_NON_ROOT_USER}
    - DB_POSTGRESDB_PASSWORD=${POSTGRES_NON_ROOT_PASSWORD}
    ##- EXECUTIONS_MODE=queue
    ##- QUEUE_MODE=redis
    ##- QUEUE_BULL_REDIS_HOST=redis
    ##- QUEUE_BULL_REDIS_PORT=6379
    ##- QUEUE_HEALTH_CHECK_ACTIVE=true
    - N8N_ENCRYPTION_KEY=${ENCRYPTION_KEY}
    - N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS=true
    - N8N_HOST=localhost
    - N8N_PORT=5678
    - N8N_PROTOCOL=http
    - N8N_BASIC_AUTH_ACTIVE=true
    - N8N_BASIC_AUTH_USER=${N8N_USER}
    - N8N_BASIC_AUTH_PASSWORD=${N8N_PASSWORD}
    - N8N_LOG_OUTPUT=file
    - N8N_LOG_LEVEL=debug
    - N8N_METRICS=true
    - N8N_RUNNERS_ENABLED=true
  links:
    - postgres
    - redis
  volumes:
    - n8n_storage:/home/node/.n8n
  depends_on:
    redis:
      condition: service_healthy
    postgres:
      condition: service_healthy

services:
  postgres:
    image: postgres:16
    restart: always
    environment:
      - POSTGRES_USER
      - POSTGRES_PASSWORD
      - POSTGRES_DB
      - POSTGRES_NON_ROOT_USER
      - POSTGRES_NON_ROOT_PASSWORD
    volumes:
      - db_storage:/var/lib/postgresql/data
      - ./init-data.sh:/docker-entrypoint-initdb.d/init-data.sh
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -h localhost -U ${POSTGRES_USER} -d ${POSTGRES_DB}']
      interval: 5s
      timeout: 5s
      retries: 10

  redis:
    image: redis:6-alpine
    restart: always
    volumes:
      - redis_storage:/data
    healthcheck:
      test: ['CMD', 'redis-cli', 'ping']
      interval: 5s
      timeout: 5s
      retries: 10

  n8n:
    <<: *shared
    ports:
      - 5678:5678
    user: root

  n8n-worker:
    <<: *shared
    command: worker
    depends_on:
      - n8n

  redisinsight:
    image: redislabs/redisinsight:1.14.0
    container_name: redisinsight
    ports:
      - "8001:8001"
    restart: always
    depends_on:
      redis:
        condition: service_healthy

  bull-board:
    build:
      context: .
      dockerfile: Dockerfile.bullboard
    container_name: bull-board
    ports:
      - "3002:3002"
    environment:
      - REDIS_HOST=redis
    depends_on:
      redis:
        condition: service_healthy

  pgadmin:
    image: dpage/pgadmin4
    container_name: pgadmin
    restart: always
    ports:
      - "5050:80"
    volumes:
      - pgadmin_data:/var/lib/pgadmin
    environment:
      PGADMIN_DEFAULT_EMAIL: [email protected]
      PGADMIN_DEFAULT_PASSWORD: admin
    depends_on:
      postgres:
        condition: service_healthy

  prometheus:
    image: prom/prometheus
    container_name: prometheus
    restart: always
    ports:
      - "9090:9090"
    volumes:
      - prometheus_data:/prometheus
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
    depends_on:
      - n8n

  grafana:
    image: grafana/grafana
    container_name: grafana
    restart: always
    ports:
      - "3003:3000"
    volumes:
      - grafana_data:/var/lib/grafana
    environment:
      - GF_SECURITY_ADMIN_USER=admin
      - GF_SECURITY_ADMIN_PASSWORD=admin
    depends_on:
      - prometheus

I added a few extras, like pgAdmin, Grafana, and Prometheus. I disabled queue mode, though I have a worker in this file. You can reference it all here: n8n-hosting/docker-compose/subfolderWithSSL at main · n8n-io/n8n-hosting · GitHub
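If it helps anyone copying this: the compose file reads its credentials from a `.env` file next to it. A minimal sketch with the variable names the compose file references (every value here is a placeholder to replace, not from the thread):

```shell
# Write a .env matching the ${...} references in the compose file above.
cat > .env <<'EOF'
POSTGRES_USER=postgres
POSTGRES_PASSWORD=change-me
POSTGRES_DB=n8n
POSTGRES_NON_ROOT_USER=n8n
POSTGRES_NON_ROOT_PASSWORD=change-me
ENCRYPTION_KEY=replace-with-a-long-random-string
N8N_USER=admin
N8N_PASSWORD=change-me
EOF

# Then bring the stack up:
# docker compose up -d
```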

Also, some people use ngrok for a temporary domain. I use Cloudflare with a domain I own for my prod setup. My sample above uses localhost, so you can’t do callback URLs for some services.

Some people also use Railway for quick, easy deployments; it already gives you a domain name etc.

Let me know if you have any questions; I’ll try to help more if I can.
