Workflows Stuck in "Queued" Status After ~60 Seconds of Execution Time

Hey all,

I’m experiencing a critical issue with my n8n instance hosted on Elest.io. My workflows are getting stuck in “queued” status, making the entire system unusable for production.

Issue Description

My workflow looks like this:

When executed:

  1. It starts normally
  2. After approximately 60 seconds of execution time, it transitions to “queued” status
  3. New executions immediately show as “queued” without even starting
  4. In the logs, I see: Error with Webhook-Response for execution "XXXX": "The execution was cancelled"

My Environment

  • n8n version: 1.80.4
  • Running in queue mode with one main and one worker
  • PostgreSQL 15 database
  • Redis for queue management
  • Docker Compose setup

Temporary Workarounds I’ve Found

The only way I’ve been able to temporarily fix this is by:

  1. Duplicating the affected workflow
  2. Setting a new webhook URL
  3. Deleting the original workflow (which removes those queued executions)
  4. Activating the new workflow

This works for a while, but the issue inevitably returns.

Related GitHub Issues

After extensive research, I found these potentially related issues:

What I’ve Tried

  • Upgrading server RAM and CPU resources
  • Implementing Redis flushing on restart (one possible wiring is sketched just after this list)
  • Setting longer timeouts in docker-compose (N8N_WEBHOOKS_TIMEOUT: 75000)
  • Configuring task runners (N8N_RUNNERS_ENABLED: “true”)
  • Adjusting queue settings (QUEUE_PROCESS_TIMEOUT: 420000)
  • Explicitly setting the Redis DB number
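
For the Redis flushing item, here is roughly how it can be wired into Docker Compose — an illustrative sketch only, not my exact configuration (the one-off service name is made up, and it assumes the "redis" service from the compose file below is already accepting connections):

```yaml
# Illustrative sketch: a one-off service that wipes Redis whenever the stack is (re)started.
services:
  redis-flush:
    image: redis:7
    depends_on:
      - redis
    # FLUSHALL clears every key, including the Bull queue that n8n uses in queue mode
    entrypoint: ["redis-cli", "-h", "redis", "FLUSHALL"]
    restart: "no"
```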

Docker Compose Configuration

```yaml
version: "3.8"

volumes:
  db_data:
  n8n:
  redis_data:

x-shared: &shared
  restart: always
  image: n8nio/n8n:${SOFTWARE_VERSION_TAG}
  environment:
    DB_TYPE: postgresdb
    DB_POSTGRESDB_HOST: postgres
    DB_POSTGRESDB_PORT: 5432
    DB_POSTGRESDB_DATABASE: ${POSTGRES_DB}
    DB_POSTGRESDB_USER: ${POSTGRES_USER}
    DB_POSTGRESDB_PASSWORD: ${SOFTWARE_PASSWORD}
    EXECUTIONS_MODE: queue
    QUEUE_BULL_REDIS_HOST: redis
    QUEUE_BULL_REDIS_DB: 0
    QUEUE_HEALTH_CHECK_ACTIVE: "true"
    QUEUE_PROCESS_TIMEOUT: 420000
    N8N_QUEUE_BULL_CONCURRENCY: 10
    N8N_WEBHOOKS_TIMEOUT: 75000
    WEBHOOK_TUNNEL_URL: https://${DOMAIN}
    WEBHOOK_URL: https://${DOMAIN}
    N8N_BASIC_AUTH_ACTIVE: "true"
    N8N_BASIC_AUTH_USER: ${N8N_BASIC_AUTH_USER}
    N8N_BASIC_AUTH_PASSWORD: ${SOFTWARE_PASSWORD}
    N8N_HOST: ${DOMAIN}
    N8N_EMAIL_MODE: "smtp"
    N8N_SMTP_HOST: ${SMTP_HOST}
    N8N_SMTP_PORT: ${SMTP_PORT}
    N8N_SMTP_USER: " "
    N8N_SMTP_PASS: " "
    N8N_SMTP_SENDER: ${SMTP_FROM_EMAIL}
    N8N_SMTP_SSL: "false"
    NODE_TLS_REJECT_UNAUTHORIZED: 0
    EXECUTIONS_DATA_PRUNE: ${EXECUTIONS_DATA_PRUNE}
    EXECUTIONS_DATA_MAX_AGE: ${EXECUTIONS_DATA_MAX_AGE}
    N8N_ENCRYPTION_KEY: ${N8N_ENCRYPTION_KEY}
    N8N_DEFAULT_BINARY_DATA_MODE: filesystem
    N8N_PAYLOAD_SIZE_MAX: 32
    N8N_RUNNERS_ENABLED: "true"
    N8N_RUNNERS_MODE: "internal"
```
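
For completeness, the services section is not included above. A queue-mode stack like this (one main instance plus one worker, Postgres, Redis) is typically wired up roughly as follows — a sketch reusing the &shared anchor; the service names match the containers in the logs (app-n8n-1 / app-n8n-worker-1), but the images, ports, and mount paths shown are the usual defaults rather than a copy of my file:

```yaml
services:
  postgres:
    image: postgres:15
    restart: always
    environment:
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${SOFTWARE_PASSWORD}
    volumes:
      - db_data:/var/lib/postgresql/data

  redis:
    image: redis:7
    restart: always
    volumes:
      - redis_data:/data

  n8n:
    <<: *shared              # main instance: serves the UI, receives webhooks, enqueues jobs
    ports:
      - "5678:5678"
    volumes:
      - n8n:/home/node/.n8n
    depends_on:
      - postgres
      - redis

  n8n-worker:
    <<: *shared              # worker: pulls jobs from the Redis (Bull) queue and executes them
    command: worker
    volumes:
      - n8n:/home/node/.n8n
    depends_on:
      - n8n
```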

Recent Logs

My logs show executions being cancelled immediately after queueing:

```
app-n8n-1: Enqueued execution 4122 (job 1580)
app-n8n-1: Error with Webhook-Response for execution "4122": "The execution was cancelled"
app-n8n-1: The execution was cancelled
app-n8n-1: The execution was cancelled
```

Any help from the community would be greatly appreciated. I’m happy to provide additional information, join a call, or work with anyone who has experienced something similar.

Thank you!

It looks like your topic is missing some important information. Could you provide the following, if applicable?

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:
  1. Are you consistently seeing this issue after exactly 60 seconds, or does the timing vary?
  2. Have you monitored resource usage (CPU, memory, disk I/O) during these executions?
  3. Are there any specific nodes in your workflow that might be resource-intensive or prone to timeouts?

(There is a strong possibility that the server you are hosting on doesn’t have enough resources. Could I get your server config?)

Hey @Yo_its_prakash

Server Config

LARGE-4C-8G (4 vCPUs, 8 GB RAM, 40 GB storage), Provider: Hetzner

Server Stats (last 3 days)

Does anyone from the forum or n8n support team have any ideas?

Do you have a valid HTTPS connection with an SSL cert? If you don’t, webhooks will have problems.

Hey @Daniel_Lamphere

Yes, I do. Everything was working fine for many days, then one day it just started doing this.

Additional information: I noticed these errors.

Logs

```
app-n8n-1         | Editor is now accessible via:
app-n8n-1         | https://n8n-inscope-u20621.vm.elestio.app
app-n8n-worker-1  | Worker errored while running execution 4121 (job 1579)
app-n8n-worker-1  | Worker failed to find data for execution 4121 (job 1579) (execution 4121)
app-n8n-1         | Execution 4121 (job 1579) failed
app-n8n-1         | Error: Worker failed to find data for execution 4121 (job 1579)
app-n8n-1         |     at JobProcessor.processJob (/usr/local/lib/node_modules/n8n/dist/scaling/job-processor.js:78:19)
app-n8n-1         |     at processTicksAndRejections (node:internal/process/task_queues:95:5)
app-n8n-1         |     at Queue.<anonymous> (/usr/local/lib/node_modules/n8n/dist/scaling/scaling.service.js:115:17)
app-n8n-1         | 
app-n8n-1         | Enqueued execution 4147 (job 1581)
app-n8n-worker-1  | Worker started execution 4147 (job 1581)
app-n8n-worker-1  | (node:7) Warning: Setting the NODE_TLS_REJECT_UNAUTHORIZED environment variable to '0' makes TLS connections and HTTPS requests insecure by disabling certificate verification.
app-n8n-worker-1  | (Use `node --trace-warnings ...` to show where the warning was created)
app-n8n-1         | Execution 4147 (job 1581) finished successfully
app-n8n-worker-1  | Worker finished execution 4147 (job 1581)
app-n8n-1         | Enqueued execution 4148 (job 1582)
app-n8n-worker-1  | Worker started execution 4148 (job 1582)
app-n8n-1         | Enqueued execution 4149 (job 1583)
app-n8n-worker-1  | Worker started execution 4149 (job 1583)
app-n8n-worker-1  | Worker finished execution 4148 (job 1582)
app-n8n-1         | Execution 4148 (job 1582) finished successfully
db                | 2025-03-01 20:01:20.429 UTC [27] LOG:  checkpoint starting: time
cache             | 1:M 01 Mar 2025 20:01:21.022 * 100 changes in 300 seconds. Saving...
cache             | 1:M 01 Mar 2025 20:01:21.023 * Background saving started by pid 373
cache             | 373:C 01 Mar 2025 20:01:21.029 * DB saved on disk
cache             | 373:C 01 Mar 2025 20:01:21.030 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
cache             | 1:M 01 Mar 2025 20:01:21.125 * Background saving terminated with success
db                | 2025-03-01 20:01:23.559 UTC [27] LOG:  checkpoint complete: wrote 32 buffers (0.2%); 0 WAL file(s) added, 0 removed, 0 recycled; write=3.118 s, sync=0.005 s, total=3.130 s; sync files=16, longest=0.002 s, average=0.001 s; distance=140 kB, estimate=140 kB
app-n8n-1         | Enqueued execution 4150 (job 1584)
app-n8n-1         | Error with Webhook-Response for execution "4149": "The execution was cancelled"
app-n8n-1         | The execution was cancelled
app-n8n-1         | The execution was cancelled
app-n8n-1         | Enqueued execution 4151 (job 1585)
app-n8n-1         | Enqueued execution 4152 (job 1586)
app-n8n-1         | Enqueued execution 4153 (job 1587)
db                | 2025-03-01 20:06:20.638 UTC [27] LOG:  checkpoint starting: time
db                | 2025-03-01 20:06:23.283 UTC [27] LOG:  checkpoint complete: wrote 27 buffers (0.2%); 0 WAL file(s) added, 0 removed, 0 recycled; write=2.629 s, sync=0.006 s, total=2.645 s; sync files=15, longest=0.003 s, average=0.001 s; distance=78 kB, estimate=134 kB
app-n8n-1         | Enqueued execution 4154 (job 1588)
```

After many hours spent looking at this and trying to debug the issue, the only thing that helped was switching from queue mode to regular mode.

This is not an ideal solution, but I really don’t know what else I can do.
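
For anyone else weighing the same fallback: the change itself is just the execution mode in the shared environment block (a sketch against the compose file earlier in this thread; in regular mode the queue settings are unused and the worker container can be removed):

```yaml
x-shared: &shared
  environment:
    EXECUTIONS_MODE: regular   # was "queue"; executions now run inside the main n8n process
    # QUEUE_BULL_* settings are not used in regular mode, and the separate worker service is no longer needed
```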

@AliFarahat Did you figure out what was happening? I’m experiencing the same issue and have applied the same workaround, but I’m concerned this might pose a problem in the future.

Hey @Vitor_Fachini ,

No idea; I have spent so much time on this. I even tried going with the official hosted n8n version and still faced the same issue with my workflows.

We need some official support. I sent them an email some time back and still have not received a response.

Hi @AliFarahat
Sorry for the late reply; we are currently trying to manage a huge backlog of support requests with a very small team… (we’re also hiring, btw)

Having had a look at your issue, it sounds to me like a problem with your Respond to Webhook node: the execution cannot finish because of it, which then queues up any other executions being triggered.
Could you please share your workflow so we can see what you’re doing in the last node? (You’ve only provided a screenshot at the top)

Tip for sharing your workflow in the forum

Pasting your n8n workflow


Make sure to copy your n8n workflow and paste it in the code block, i.e. between the pair of triple backticks; this can also be done by clicking </> (preformatted text) in the editor and pasting in your workflow.

```
<your workflow>
```

Make sure that you’ve removed any sensitive information from your workflow and include dummy data or pinned data as much as you can!


Thanks :pray:

This line implies that there is an unhandled error. Do you see any other errors just before that line?

Hello @ria and @netroy ,

Thanks for joining the conversation. Let me provide more context and share some workflows in this post.

The issue is not confined to one workflow; it is affecting every workflow that runs past the 60-second execution mark. It also affects executions in both regular and queue mode.

Running in regular mode causes the service to crash, and I have to manually restart it.

When running in queue mode, the server starts adding new queued executions every few seconds.

The error is intermittent; it affects roughly 1 in 10 executions (and only if they take longer than 60 seconds).

Here are some workflows where I faced errors:

1. Original Workflow: This workflow only responds to the webhook after all nodes have executed.

I can’t attach the other workflows; the number of characters exceeds the forum’s limit.

In the other workflows, the webhook is responded to immediately, and the AI-generated content is then delivered to a webhook by an HTTP Request node.

Happy to share the other nodes, but I don’t know how…