I’m running a self-hosted n8n instance, version 1.39.1. Webhook triggers had been working, but recently stopped.
When I hit any webhook, I get:
{
  "code": 0,
  "message": "Workflow Webhook Error: Workflow could not be started!"
}
To isolate the problem, I just created a blank workflow, added a Webhook trigger (GET), put it in test mode, and tried to load the trigger. I get the same response as from one of the live triggers that had been working.
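For reference, this is roughly how I’m hitting the production and test endpoints; the path here is just a placeholder, not one of my real webhook paths:

# Production webhook (placeholder path)
curl -i https://n8n.aspereo.com/webhook/example-path

# Test webhook, while the workflow is listening in test mode
curl -i https://n8n.aspereo.com/webhook-test/example-path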
In the node I see this error:
{
  "errorMessage": "Cannot read properties of undefined (reading 'getNode')",
  "errorDetails": {},
  "n8nDetails": {
    "n8nVersion": "1.39.1 (Self Hosted)",
    "binaryDataMode": "default",
    "stackTrace": [
      "TypeError: Cannot read properties of undefined (reading 'getNode')",
      "    at Object.webhook (/root/.n8n/nodes/node_modules/n8n-nodes-base/nodes/Webhook/Webhook.node.ts:185:64)",
      "    at Workflow.runWebhook (/usr/local/lib/node_modules/n8n/node_modules/n8n-workflow/dist/Workflow.js:665:38)",
      "    at Object.executeWebhook (/usr/local/lib/node_modules/n8n/dist/WebhookHelpers.js:226:48)",
      "    at processTicksAndRejections (node:internal/process/task_queues:95:5)",
      "    at /usr/local/lib/node_modules/n8n/dist/TestWebhooks.js:99:37"
    ]
  }
}
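In case it’s relevant, the first frame of that stack trace resolves the Webhook node from /root/.n8n/nodes/node_modules/n8n-nodes-base rather than from the bundled install under /usr/local/lib/node_modules/n8n. To see what is actually installed there (container name n8n, as in the compose below, and assuming a normal npm layout), something like this works:

# List packages under the custom nodes directory named in the stack trace
docker exec n8n ls /root/.n8n/nodes/node_modules

# Check which n8n-nodes-base version lives there
docker exec n8n grep '"version"' /root/.n8n/nodes/node_modules/n8n-nodes-base/package.json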
From the server logs for the live workflow:
{"__type":"$$EventMessageWorkflow","id":"9502b05e-e135-4acf-adbe-a7239667df2b","ts":"2024-05-17T09:40:53.089-04:00","eventName":"n8n.workflow.failed","message":"n8n.workflow.failed","payload":{"executionId":"1000","success":false,"userId":"f023b88d-d1e1-4a4f-b5e5-b7f6d6545696","workflowId":"gKQAo2K0Eqa6RVZT","isManual":false,"workflowName":"Catch FD Wayfair Order Cancel","lastNodeExecuted":"Receive Webhook from FD","errorNodeType":"n8n-nodes-base.webhook","errorMessage":"Cannot read properties of undefined (reading 'getNode')"}}
Thanks, I’m really enjoying n8n, it’s a great product.
I’m using your official image. I installed it as part of the Seatable.io compose stack; the compose file is below. The only community node package I have installed is Seatable.
I have a suspicion as to the source of the problem. I have Seatable running at https://seatable.aspereo.com. The original Seatable-provided compose file used https://seatable.aspereo.com:6231 to access n8n. I changed this to https://n8n.aspereo.com in the Caddy (reverse proxy) config and in the compose file. While n8n is accessible at the new URL, there is a setting somewhere that causes the webhook URLs to use the old URL, which is not accessible, so I simply change it to the accessible URL. I’m thinking this may be causing the issue.
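For what it’s worth, the new hostname does answer; a quick check from the host looks like this:

# Confirm the reverse proxy responds on the new n8n hostname
curl -I https://n8n.aspereo.com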
At any rate, here’s the compose. Any insight would be greatly appreciated.
services:
  caddy:
    labels:
      caddy: n8n.aspereo.com
      caddy.reverse_proxy: n8n:5678
    ports:
      - ${N8N_PORT:-6231}:${N8N_PORT:-6231} # original
  n8n-postgres:
    image: ${N8N_POSTGRES_IMAGE:-postgres:11}
    restart: unless-stopped
    container_name: n8n-postgres
    environment:
      - POSTGRES_USER=${POSTGRES_USER:-root}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD:?Variable is not set or empty}
      - POSTGRES_DB=${POSTGRES_DB:-n8n}
      - POSTGRES_NON_ROOT_USER=${POSTGRES_NON_ROOT_USER:-non_root_user}
      - POSTGRES_NON_ROOT_PASSWORD=${POSTGRES_NON_ROOT_PASSWORD:?Variable is not set or empty}
    volumes:
      - "/opt/n8n-postgres:/var/lib/postgresql/data"
      - "./n8n-init-data.sh:/docker-entrypoint-initdb.d/init-data.sh"
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "pg_isready -h localhost -U ${POSTGRES_USER:-root} -d ${POSTGRES_DB:-n8n}",
        ]
      interval: 5s
      timeout: 5s
      retries: 10
    networks:
      - backend-n8n-net
  n8n:
    image: ${N8N_IMAGE:-docker.n8n.io/n8nio/n8n}
    restart: unless-stopped
    container_name: n8n
    user: root
    environment:
      - N8N_HOST=${N8N_HOST:-n8n}
      - N8N_PORT=5678
      - N8N_PROTOCOL=${N8N_PROTOCOL:-http}
      - N8N_EMAIL_MODE=smtp
      - N8N_SMTP_HOST=smtp.sendgrid.net
      - N8N_SMTP_USER=apikey
      - N8N_SMTP_PASS=
      - [email protected]
      - N8N_SMTP_PORT=465
      - NODE_ENV=production
      - WEBHOOK_URL=${SEATABLE_SERVER_PROTOCOL:-https}://${SEATABLE_SERVER_HOSTNAME}:${N8N_PORT:-6231}/
      - GENERIC_TIMEZONE=${TIME_ZONE}
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY:?Variable is not set or empty, might be already set in n8n config file}
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=n8n-postgres
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_DATABASE=${POSTGRES_DB:-n8n}
      - DB_POSTGRESDB_USER=${POSTGRES_NON_ROOT_USER:-non_root_user}
      - DB_POSTGRESDB_PASSWORD=${POSTGRES_NON_ROOT_PASSWORD}
    volumes:
      - "/opt/n8n:/root/.n8n"
    labels:
      caddy: ${N8N_HOST}
      caddy.reverse_proxy: "{{upstreams 5678}}"
    depends_on:
      n8n-postgres:
        condition: service_healthy
    networks:
      - frontend-net
      - backend-n8n-net
  db-backup:
    container_name: db-backup
    image: tiredofit/db-backup
    volumes:
      - ${BACKUP_DIR}/database:/backup
      #- ./post-script.sh:/assets/custom-scripts/post-script.sh
    environment:
      - TIMEZONE=America/New_York
      - CONTAINER_NAME=db-backup
      - CONTAINER_ENABLE_MONITORING=FALSE
      # - DEBUG_MODE=TRUE
      - BACKUP_JOB_CONCURRENCY=1 # Only run one job at a time
      - DEFAULT_CHECKSUM=NONE # Don't create checksums
      - DEFAULT_COMPRESSION=ZSTD # Compress all with ZSTD
      - DEFAULT_BACKUP_INTERVAL=1440 # Backup every 1440 minutes
      - DEFAULT_BACKUP_BEGIN=0000 # Start backing up at midnight
      - DEFAULT_CLEANUP_TIME=8640 # Cleanup backups after a week
      - USER_DBBACKUP=1000 # brian
      - GROUP_DBBACKUP=1000 # brian
      - DB01_TYPE=postgres
      - DB01_HOST=n8n-postgres
      - DB01_NAME=${POSTGRES_DB:-n8n}
      - DB01_USER=${POSTGRES_NON_ROOT_USER:-non_root_user}
      - DB01_PASS=${POSTGRES_NON_ROOT_PASSWORD}
      #- DB01_BACKUP_INTERVAL=30 # (override) Backup every 30 minutes
      - DB01_BACKUP_BEGIN=+1 # (override) Backup starts immediately
      #- DB01_CLEANUP_TIME=180 # (override) Cleanup backups when they are older than 180 minutes
      #- DB01_CHECKSUM=SHA1 # (override) Create a SHA1 checksum
      #- DB01_COMPRESSION=GZ # (override) Compress with GZIP
      - DB02_TYPE=mysql
      - DB02_HOST=mariadb
      - DB02_NAME=ALL
      - DB02_USER=root
      - DB02_PASS=${SEATABLE_MYSQL_ROOT_PASSWORD}
      - DB02_BACKUP_BEGIN=+1
    restart: unless-stopped
    networks:
      - backend-n8n-net
      - backend-seatable-net
networks:
  frontend-net:
    name: frontend-net
  backend-n8n-net:
    name: backend-n8n-net
  backend-seatable-net:
    name: backend-seatable-net
This change corrected the webhook URLs to show the right host, but it does not affect the issue: I created a new workflow with a new webhook trigger, and testing yields the same error.
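For completeness, this is how I’m checking which URL-related settings the running container actually sees (container name n8n from the compose above):

# Print the webhook/host related environment variables inside the container
docker exec n8n printenv | grep -E 'WEBHOOK_URL|N8N_HOST|N8N_PROTOCOL|N8N_PORT'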