Getting an error on Valkey sometimes

Description of the problem / error / question

I’m encountering a Redis memory issue when running my workflow. The execution stops with the following error:

OOM command not allowed when used memory > 'maxmemory'. script: d40dd344d4fef242eb54ac203e24156f1db71090, on @user_script:13.

It seems Redis exceeds the configured maxmemory limit, which prevents further commands from being executed.

However, the strange thing is that I don’t have that many workflow executions or stored data that could realistically fill up Redis memory. It feels like memory isn’t being released properly, or something is stuck in Redis. What should I do, and how can I fix it?


Error message

OOM command not allowed when used memory > 'maxmemory'. script: d40dd344d4fef242eb54ac203e24156f1db71090, on @user_script:13.

Workflow details

any workflow

Expected behavior

The workflow should execute successfully without Redis memory errors. Data should be written to or read from Redis without exceeding the memory limit.


Information about your n8n setup

  • n8n version: 1.107.4

  • Database (default: SQLite): postgres

  • n8n EXECUTIONS_PROCESS setting (default: own, main): default

  • Running n8n via (Docker, npm, n8n cloud, desktop app): aws cloud

  • Operating system:



Additional context

It looks like Redis is reaching its maxmemory limit. Possible things to check or try:

  • Review Redis configuration (maxmemory, maxmemory-policy);

  • Clear old n8n cache or keys stored in Redis;

  • Increase available memory or switch policy to allkeys-lru or volatile-lru;

  • Verify that the workflow is not storing overly large data in Redis (e.g., big JSON objects or execution data).
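For the eviction-policy point above, here is a minimal sketch of the decision logic. The `suggest_policy` helper is hypothetical (not part of n8n or Redis); against a live server you would feed it the output of `redis-cli config get maxmemory-policy`:

```shell
#!/bin/sh
# Hypothetical helper: map the current maxmemory-policy to a suggestion.
# Pure shell logic, so it runs without a live Redis instance.
suggest_policy() {
  case "$1" in
    noeviction)
      # Default policy: Redis rejects writes with OOM once maxmemory is hit.
      echo "switch to allkeys-lru or volatile-lru so old keys are evicted" ;;
    allkeys-lru|volatile-lru)
      echo "eviction already enabled; raise maxmemory or store less data" ;;
    *)
      echo "review policy '$1' against your workload" ;;
  esac
}

# Against a live server you would run something like:
#   suggest_policy "$(redis-cli config get maxmemory-policy | tail -1)"
suggest_policy noeviction
```

With `noeviction` (the Redis default), writes fail with exactly the OOM error shown above once the limit is reached, which is why the policy is worth checking first.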



Hi @artemik83,

Have you checked the Redis log and memory usage over time to see whether the growth comes from the n8n payload?

I would suggest clearing the Redis cache, but I recommend checking the above first.

Thank you

Yes, there is a spike at some random point in time, but it stays below 80% of the configured limit.
I have already cleared all keys from Redis, but the error persists.

Hi @artemik83,

I recommend working through the steps below, retrying the workflow execution after each one:

  • Confirm where memory is being used (redis-cli info memory)
    • Check for used_memory_human, used_memory_peak_human, maxmemory_human and mem_fragmentation_ratio
      • If the ratio is greater than 1.5, you have memory fragmentation
      • If used_memory is close to maxmemory but the database is small, the memory is likely held by scripts, buffers, or stale keys
  • Flush cached scripts (redis-cli script flush)
  • Purge orphaned keys, checking first which ones are taking up space
    • redis-cli keys "bull:*" | wc -l
    • redis-cli keys "n8n:*" | wc -l
    • If Redis is used only by n8n you can run flushall; otherwise run the below:
      • redis-cli --scan --pattern "bull:*" | xargs redis-cli del
  • Adjust Redis config
    • Check for maxmemory
    • Check whether maxmemory-policy is set to noeviction
    • Allow eviction of old keys (maxmemory-policy allkeys-lru)
    • Restart Redis (sudo systemctl restart redis)
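The first step above can be scripted. The sketch below applies the 1.5 fragmentation threshold and a near-maxmemory check to INFO-style output; the `info` variable holds made-up sample numbers so the snippet runs without a live server. In practice you would set `info=$(redis-cli info memory)` instead (note that real INFO lines end in a carriage return, which may need stripping):

```shell
#!/bin/sh
# Sample INFO memory output (fabricated numbers for illustration).
# Replace with: info=$(redis-cli info memory | tr -d '\r')
info='used_memory:950000000
maxmemory:1000000000
mem_fragmentation_ratio:1.72'

used=$(echo "$info"  | awk -F: '/^used_memory:/ {print $2}')
max=$(echo "$info"   | awk -F: '/^maxmemory:/ {print $2}')
ratio=$(echo "$info" | awk -F: '/^mem_fragmentation_ratio:/ {print $2}')

# Fragmentation check: ratio above 1.5 suggests the allocator is wasting RAM.
if awk -v r="$ratio" 'BEGIN { exit !(r > 1.5) }'; then
  echo "fragmentation: ratio $ratio > 1.5"
fi

# Headroom check: used_memory above 90% of maxmemory means OOM errors are near.
if awk -v u="$used" -v m="$max" 'BEGIN { exit !(m > 0 && u > 0.9 * m) }'; then
  echo "near limit: $used of $max bytes used"
fi
```

With the sample values both warnings fire, which matches the situation described in this thread: usage close to the limit plus a policy that rejects writes instead of evicting.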

Please let me know if the above worked.

Thank you.

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.