Workflows Randomly Get Cancelled for No Clear Reason

Describe the problem/error/question

I think I’m facing a pretty big bug in my n8n workspace. A bunch of executions are randomly getting cancelled in workflows that haven’t been edited or touched in days.

This was happening on version 1.98.1, which had been working perfectly fine until late last night, when I suddenly woke up to multiple workflows being spammed with cancelled executions. Of course, no one was cancelling the executions. I’ve upgraded to the latest n8n version with no improvement.

The execution doesn’t even show that anything went wrong; it shows that all the nodes worked perfectly, and I can’t figure out why this is happening. It also pushes to the error workflow, but with no clear information.

I cannot for the life of me figure out what’s going wrong :confused: I’ve tried recreating the affected workflows with no success.

What is the error message (if any)?

Please share your workflow

(Select the nodes on your canvas and use the keyboard shortcuts CMD+C/CTRL+C and CMD+V/CTRL+V to copy and paste the workflow.)

Share the output returned by the last node

Information on your n8n setup

  • **n8n version:** 1.99.1
  • **Database (default: SQLite):** Postgres
  • **n8n EXECUTIONS_PROCESS setting (default: own, main):** main
  • **Running n8n via (Docker, npm, n8n cloud, desktop app):** Docker
  • **Operating system:** Ubuntu 22.04 LTS

The setup also uses queue mode with a webhook processor and one worker.
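For context, the queue-mode split follows the standard three-role layout, roughly like this (illustrative values only, not my exact config; the hostnames are placeholders):

# Shared by the main, webhook and worker containers (placeholder values)
EXECUTIONS_MODE=queue
QUEUE_BULL_REDIS_HOST=redis
DB_TYPE=postgresdb
DB_POSTGRESDB_HOST=postgres

# Each container runs a different command:
#   main:    n8n
#   webhook: n8n webhook
#   worker:  n8n worker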

Even looking at the Docker logs, they just show the execution was cancelled, with no clear reason.

Also just checked another install of n8n on a completely separate server, in a completely separate environment.

This time, it’s showing an extreme number of errors on workflows that completed successfully, along with extreme reported runtimes. The workflow below normally takes no more than 100 ms on average, yet it’s showing an absolutely crazy runtime.

Also, none of these errors triggered the error workflow that is set.

I haven’t experienced anything like this in the past six months of using n8n.

My guess is Supabase connection limits or timeouts during heavy image processing.

When your worker is generating images, it’s probably:

  • Hitting Supabase connection limits
  • Losing DB connection during long-running tasks
  • Unable to update execution status → n8n cancels the execution (one way to check how those runs were actually recorded is sketched below)
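To sanity-check that theory, you can look at how those runs ended up in n8n’s own Postgres tables. A minimal sketch, assuming the default execution_entity table and column names of recent n8n versions (verify against your schema; $N8N_DB_URL is a placeholder connection string):

# How did the last 20 executions actually end, and how long did they "run"?
psql "$N8N_DB_URL" -c \
  'SELECT id, "workflowId", status, "startedAt", "stoppedAt"
     FROM execution_entity
     ORDER BY "startedAt" DESC
     LIMIT 20;'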

Quick checks:

  1. Monitor your Supabase dashboard during those cancellation times - look for connection spikes
  2. Check if you’re hitting any rate limits or connection pool exhaustion (see the pg_stat_activity sketch below)
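For point 2, if you can query the Postgres instance behind Supabase directly, a quick pg_stat_activity count shows whether connections are piling up. This is plain Postgres, nothing n8n-specific; $SUPABASE_DB_URL is a placeholder:

# Count connections per state to spot pool exhaustion
psql "$SUPABASE_DB_URL" -c \
  'SELECT state, count(*) FROM pg_stat_activity GROUP BY state ORDER BY count(*) DESC;'

# Compare against the server's configured limit
psql "$SUPABASE_DB_URL" -c 'SHOW max_connections;'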

Try this:

# Reduce DB pressure
DB_POSTGRESDB_POOL_SIZE: 3  # shrink n8n's Postgres connection pool
EXECUTIONS_PROCESS: own  # isolate heavy tasks (note: deprecated in n8n v1)
N8N_CONCURRENCY_PRODUCTION_LIMIT: 1  # limit concurrent production runs

Supabase + queue mode + heavy image processing = connection chaos. The worker loses touch with the database and n8n just gives up on the execution.

What’s your Supabase plan? And do you see any connection/performance alerts in your Supabase dashboard when this happens?

Thanks for your response!

It’s nothing to do with Supabase. My Supabase instance is pretty rock solid. It is also self hosted and doesn’t show errors anywhere.

When you look at where the workflow is stopping, it’s right after a successful image generation. It just needs to convert the Base64 string to a file: it receives the string, but then the workflow gets cancelled for some reason and never actually attempts to convert it to an image.

Thanks for that clarification! Now I’m leaning towards a memory issue during Base64 conversion.

The image generation works fine, but when your worker tries to convert that Base64 string to a file locally, it’s probably hitting memory limits - maybe not every time, but when images are larger.

Quick tests:

  1. Try generating smaller dimensions temporarily (512x512 vs 1024x1024+)
  2. Add a Code node before conversion to check base64String.length
  3. Monitor worker memory usage during conversion (htop, or the docker stats loop sketched below)
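For point 3, a crude way to catch the spike without sitting in htop is to sample the worker container’s memory while the workflow runs. The container name n8n-worker is an assumption; use whatever yours is called:

# Sample worker memory every 2 seconds during a run (Ctrl+C to stop)
while true; do
  docker stats --no-stream --format '{{.Name}} {{.MemUsage}}' n8n-worker
  sleep 2
done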

Large Base64 strings can spike memory usage significantly: the Base64 text is roughly 4/3 the size of the original binary, and during conversion both the string and the decoded buffer sit in memory at once.

That’s all I can say for now. More research is needed.

Yeah I was wondering about a memory issue as well :thinking:

It’s a bit odd because it’s been working fine for the past month and only last night has the issue begun.

I’m hosting n8n on a dedicated server (4 vCPU, 16 GB memory, 160 GB storage), and I currently have around 10 GB of memory free, so I assume n8n has more than enough to work with.

Yeah, with 16GB total and 10GB free, raw memory shouldn’t be the issue. Something environmental definitely changed overnight.

Check these specific things:

  • docker logs n8n-worker | tail -20 - any OOM or unusual errors?
  • df -h - temp files might be filling the disk during conversion
  • docker stats during image processing - see actual memory spikes per container

Since it worked fine for a month and broke suddenly, the likely culprits are:

  • Docker container memory limits (even if host has plenty)
  • File descriptor limits
  • Disk space issues in /tmp (quick checks for all three are sketched below)
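Quick ways to check all three from the host (again, the n8n-worker container name is an assumption):

# 1. Container memory limit in bytes (0 means unlimited)
docker inspect --format '{{.HostConfig.Memory}}' n8n-worker

# 2. Open file descriptor limit inside the container
docker exec n8n-worker sh -c 'ulimit -n'

# 3. Free space on /tmp and on the Docker data directory
df -h /tmp /var/lib/docker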

This needs further investigation.