I have a workflow that previously worked flawlessly. It searches for emails with specific labels from the past month and downloads the attachments. All the nodes still run successfully, but after the final HTTP Request node, the workflow unexpectedly triggers the Gmail “Search Messages” node again and never finishes. Why would this happen?
Hi @madrian, this is very strange, as n8n does not work that way: the last node should not be able to trigger a specific node in the middle of the workflow. Can you share some information about your n8n instance?
Yes, it’s strange, and I don’t remember it being a problem before.
I run this workflow on the first day of each month. Today, on May 1st, I noticed that it failed due to a timeout error.
Task request timed out after 60 seconds

Your Code node task was not matched to a runner within the timeout period. This indicates that the task runner is currently down, or not ready, or at capacity, so it cannot service your task. If you are repeatedly executing Code nodes with long-running tasks across your instance, please space them apart to give the runner time to catch up. If this does not describe your use case, please open a GitHub issue or reach out to support. If needed, you can increase the timeout using the N8N_RUNNERS_TASK_REQUEST_TIMEOUT environment variable.
n8n version 2.15.0 (Self Hosted)

Stack trace:
Error: Task request timed out after 60 seconds
    at LocalTaskRequester.requestExpired (/usr/local/lib/node_modules/n8n/src/task-runners/task-managers/task-requester.ts:304:17)
    at LocalTaskRequester.onMessage (/usr/local/lib/node_modules/n8n/src/task-runners/task-managers/task-requester.ts:272:10)
    at TaskBroker.handleRequestTimeout (/usr/local/lib/node_modules/n8n/src/task-runners/task-broker/task-broker.service.ts:120:50)
    at Timeout. (/usr/local/lib/node_modules/n8n/src/task-runners/task-broker/task-broker.service.ts:107:9)
    at listOnTimeout (node:internal/timers:605:17)
    at processTimers (node:internal/timers:541:7)
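As a side note, the error itself points at a possible workaround: raising the timeout via the N8N_RUNNERS_TASK_REQUEST_TIMEOUT environment variable it names. A sketch of how that could look in the n8n service's environment block (the value of 120 and the unit, seconds, are my assumption based on the "60 seconds" default in the error; please verify against the n8n docs):

```yaml
# Hypothetical excerpt of the n8n service definition in docker-compose.yml;
# raises the task-request timeout above the default 60s.
# The unit (seconds) is an assumption, verify in the n8n environment variable docs.
services:
  n8n:
    environment:
      - N8N_RUNNERS_TASK_REQUEST_TIMEOUT=120
```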
I thought it was just a temporary error, so I decided to update my n8n instance. The update went fine, but now the workflow behaves as I described in my first message. My n8n version:
2.18.5
madrian@debian:~/docker/n8n$ cat docker-compose.yml
services:
  task-runners:
    image: n8nio/runners:latest
    container_name: n8n-runners
    environment:
      - N8N_RUNNERS_TASK_BROKER_URI=${N8N_RUNNERS_TASK_BROKER_URI}
      - N8N_RUNNERS_AUTH_TOKEN=${N8N_RUNNERS_AUTH_TOKEN}
      # etc.
    depends_on:
      - n8n
    networks:
      - n8n-network

  n8n:
    container_name: n8n
    image: docker.n8n.io/n8nio/n8n
    restart: always
    command: ["start"] # <-- remove --tunnel
    ports:
      - "5678:5678" # optional; remove if you don't need direct host access
    environment:
      - N8N_HOST=${SUBDOMAIN}.${DOMAIN_NAME}
      - N8N_PORT=5678
      - N8N_PROTOCOL=https
      - NODE_ENV=production
      # Set this to your ngrok public URL (or reserved domain)
      - WEBHOOK_URL=${WEBHOOK_URL}
      - N8N_PROXY_HOPS=1
      - GENERIC_TIMEZONE=${GENERIC_TIMEZONE}
      - N8N_BASIC_AUTH_ACTIVE=${N8N_BASIC_AUTH_ACTIVE}
      - N8N_BASIC_AUTH_USER=${N8N_BASIC_AUTH_USER}
      - N8N_BASIC_AUTH_PASSWORD=${N8N_BASIC_AUTH_PASSWORD}
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
      - N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS=true
      - N8N_RUNNERS_ENABLED=true
      - N8N_RUNNERS_MODE=${N8N_RUNNERS_MODE}
      - N8N_RUNNERS_AUTH_TOKEN=${N8N_RUNNERS_AUTH_TOKEN}
      - N8N_RUNNERS_BROKER_LISTEN_ADDRESS=${N8N_RUNNERS_BROKER_LISTEN_ADDRESS}
      - N8N_RUNNERS_BROKER_PORT=${N8N_RUNNERS_BROKER_PORT}
      # PostgreSQL configuration
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_DATABASE=${DB_POSTGRESDB_DATABASE}
      - DB_POSTGRESDB_USER=${DB_POSTGRESDB_USER}
      - DB_POSTGRESDB_PASSWORD=${DB_POSTGRESDB_PASSWORD}
    volumes:
      - n8n_data:/home/node/.n8n
      - ${DATA_FOLDER}/local_files:/files
    depends_on:
      - postgres
    networks:
      - n8n-network

  postgres:
    container_name: postgres
    image: postgres:13
    restart: always
    environment:
      - POSTGRES_USER=${DB_POSTGRESDB_USER}
      - POSTGRES_PASSWORD=${DB_POSTGRESDB_PASSWORD}
      - POSTGRES_DB=${DB_POSTGRESDB_DATABASE}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    networks:
      - n8n-network

  caddy:
    image: caddy:2-alpine
    container_name: n8n-webhooks-caddy
    restart: unless-stopped
    depends_on:
      - n8n
    # NOTE: no host ports here -> no conflict with occupied 8080 on the host
    expose:
      - "8080"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy_data:/data
      - caddy_config:/config
    networks:
      - n8n-network

  ngrok:
    image: ngrok/ngrok:latest
    container_name: n8n-webhooks-ngrok
    restart: unless-stopped
    depends_on:
      - caddy
    environment:
      NGROK_AUTHTOKEN: "${NGROK_AUTHTOKEN}"
    command:
      - http
      - caddy:8080
      - --url=baylee-azotic-eruptively.ngrok-free.dev
    ports:
      - "4040:4040" # optional ngrok inspector
    networks:
      - n8n-network

volumes:
  n8n_data:
    external: true
  postgres_data:
    external: true
  caddy_data:
  caddy_config:

networks:
  n8n-network:
    driver: bridge
My workflow:
I think @Jekylls would be able to debug this.
Have you tried reinstalling n8n? The last node randomly executing a node in the middle of the workflow, without leaving any trace, is far too strange.
Sounds like you’ve got a loop somewhere—either the HTTP Request node is triggering the Gmail node again, or there’s a circular reference in your workflow that wasn’t there before.
Also check if the Gmail node has “Continue on error” enabled—if the HTTP Request is failing silently, Gmail might be retrying automatically. Look at your execution logs and see what the HTTP Request is actually returning. That’ll tell you if it’s erroring out and causing a retry loop.
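If it helps, here is roughly how I would pull the relevant logs, assuming the container names from your compose file (n8n and n8n-runners; adjust if yours differ):

```shell
# Follow the main n8n container's logs while triggering the workflow,
# to see what the HTTP Request node returns and what fires afterwards
docker logs -f n8n

# Also check the task runner container, since the earlier timeout error
# pointed at the runner not picking up tasks
docker logs --tail 100 n8n-runners
```

You can also open the execution in the n8n UI and inspect each node's input/output data to see which node re-triggered the Gmail step.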
