Error workflow triggered for seemingly successfully completed workflows

Hi all,

I’m creating this question following this thread, which seems similar to my issue:

So, I have a few error workflows being triggered, but when I look at the executions that triggered them, most of the time they completed successfully. The executions are either in Failed status with no node in error, or not showing as having been executed at all.

I had a look at the logs from the main & worker pods and I don’t see anything suspicious.
So I was wondering whether the issue could be linked to Redis.

Here is the Redis server configuration:

    maxmemory 3gb
    maxmemory-policy allkeys-lru 
    loglevel verbose
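One side note on that configuration: Bull (the queue library n8n uses in queue mode) keeps job state in Redis, and the Bull documentation recommends `maxmemory-policy noeviction`, since a policy like `allkeys-lru` can silently evict queue keys under memory pressure, which could plausibly produce executions that look failed or missing. A quick way to check whether evictions are actually happening (a sketch, assuming `redis-cli` is available in the Redis pod; the pod name `n8n-redis-master-0` is hypothetical):

```shell
# Check how many keys Redis has evicted since startup.
# A non-zero evicted_keys together with allkeys-lru would mean
# queue data can disappear before workers or the UI read it.
kubectl exec -it n8n-redis-master-0 -- \
  redis-cli INFO stats | grep -E 'evicted_keys|expired_keys'
```

If `evicted_keys` keeps growing, switching the policy to `noeviction` (and raising `maxmemory` if needed) would be worth testing.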

And here is the Redis-related n8n configuration:

  QUEUE_BULL_REDIS_HOST: n8n-redis-master
  QUEUE_BULL_REDIS_PORT: "6379"
  QUEUE_BULL_REDIS_DB: "0"
  QUEUE_BULL_REDIS_TIMEOUT_THRESHOLD: "30000"

And I’ve noticed that there are a few blocked clients; I’m not sure whether it’s related:

# Clients
connected_clients:22
cluster_connections:0
maxclients:10000
client_recent_max_input_buffer:20553
client_recent_max_output_buffer:0
blocked_clients:5
tracking_clients:0
clients_in_timeout_table:5
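For what it’s worth, a handful of blocked clients is expected in queue mode: each idle worker blocks on a `BRPOPLPUSH` call waiting for the next job, and those connections are counted in `blocked_clients`. To confirm that the blocked clients are just idle workers rather than something stuck (again a sketch assuming `redis-cli` access; the pod name is hypothetical):

```shell
# Summarize which command each connected client is currently running;
# workers idling on the queue typically show up as cmd=brpoplpush.
kubectl exec -it n8n-redis-master-0 -- \
  redis-cli CLIENT LIST | grep -o 'cmd=[a-z|]*' | sort | uniq -c
```

If the count of `cmd=brpoplpush` entries roughly matches your worker count, the blocked clients are harmless.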

Do you think this could be linked?
Let me know what else would be useful for investigating this issue.

Information on your n8n setup

  • n8n version: 0.228.2
  • Database you’re using (default: SQLite): PostgreSQL
  • Running n8n with the execution process [own(default)]: own (default)
  • Running n8n via [npm]: k8s with 1 main (16GB/0.5->8CPU), 1 webhook (4GB/0.5->1CPU), 5-35 workers (8GB/0.5->8CPU), 1 Redis (4GB/0.1->1CPU)

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.