Best way to keep one and only one execution running?

I have a workflow that processes a queue (which is a series of jobs in a SQL table that get marked queued, started, and finished). The items need to be processed one row at a time.
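Since the jobs already carry a queued/started/finished status column, the one-at-a-time guarantee can come from the table itself: claim a row only if it is still marked queued, so two overlapping executions can never grab the same job. A minimal sketch using SQLite (the `jobs` table, `status` column, and value names here are assumptions, not Lee's actual schema):

```python
import sqlite3

def claim_next_job(conn: sqlite3.Connection):
    """Atomically claim the oldest queued job; return its id, or None."""
    with conn:  # runs as one transaction
        row = conn.execute(
            "SELECT id FROM jobs WHERE status = 'queued' ORDER BY id LIMIT 1"
        ).fetchone()
        if row is None:
            return None  # queue is empty
        job_id = row[0]
        # The status guard in the WHERE clause loses gracefully if a
        # concurrent execution claimed the same row first.
        updated = conn.execute(
            "UPDATE jobs SET status = 'started' "
            "WHERE id = ? AND status = 'queued'",
            (job_id,),
        ).rowcount
        return job_id if updated == 1 else None
```

On a server database, the equivalent idiom would be `SELECT ... FOR UPDATE SKIP LOCKED` (Postgres/MySQL), which lets several workers pull from the same queue safely.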

Any suggestions for the best way to have a workflow “always running only one instance” when there are items in the queue? And start running again as soon as items are added to the queue?

I’ve tried:

  1. Starting on a schedule. Problem is, the execution time is variable, so I either have idle time or two running at once.

  2. Starting with a webhook and then calling itself when it finishes. That works while I’m babysitting it, but is definitely not a production solution.

  3. Using a separate workflow to monitor the queue and executions. But the N8N API only lets me filter on “error”, “success”, or “waiting”. There’s no way to query for “running”.

Suggestions?

Lee

P.S. There’s a universe where I scale up the server and adjust things so I can have 3-4 simultaneous executions, but I still want to keep the number of running executions within a range and not swamp the server - and only run them when there are items in the queue, and start them up when new items are added to the queue.
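The "keep running executions within a range" idea in the P.S. can also live in the same table: before starting another execution, count how many jobs are currently marked started and compare against a cap. A sketch under the same assumed schema (`MAX_RUNNING` is an illustrative value, not anything from the post):

```python
import sqlite3

MAX_RUNNING = 3  # assumed cap; tune to what the server can actually handle

def can_start_another(conn: sqlite3.Connection) -> bool:
    """True only if we are under the concurrency cap and work is waiting."""
    running = conn.execute(
        "SELECT COUNT(*) FROM jobs WHERE status = 'started'"
    ).fetchone()[0]
    queued = conn.execute(
        "SELECT COUNT(*) FROM jobs WHERE status = 'queued'"
    ).fetchone()[0]
    return running < MAX_RUNNING and queued > 0
```

A trigger workflow (schedule or webhook on insert) would call this check first and simply exit when it returns false, which also covers "only run when there are items in the queue."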

It looks like your topic is missing some important information. Could you provide the following, if applicable?

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

hello @Lee_S

That sounds like a need for a message broker service (e.g. RabbitMQ, Amazon MQ, or Redis). It’s also possible to create something like a lock in the workflow with workflowStaticData, but that would be more complicated.
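The lock idea doesn't strictly need workflowStaticData or a broker; the SQL database that already holds the queue can hold a single lock row, with a timeout so a crashed run doesn't wedge the queue forever. A rough sketch (the `lock` table, `locked_at` column, and timeout value are all assumptions for illustration):

```python
import sqlite3
import time

LOCK_TIMEOUT = 600  # seconds; assume a dead run frees the lock after this

def try_acquire_lock(conn: sqlite3.Connection, now=None) -> bool:
    """Take the single lock row if it is free or stale; True on success."""
    now = time.time() if now is None else now
    with conn:
        updated = conn.execute(
            "UPDATE lock SET locked_at = ? "
            "WHERE locked_at IS NULL OR locked_at < ?",
            (now, now - LOCK_TIMEOUT),
        ).rowcount
        return updated == 1

def release_lock(conn: sqlite3.Connection) -> None:
    """Free the lock at the end of a successful run."""
    with conn:
        conn.execute("UPDATE lock SET locked_at = NULL")
```

The workflow would acquire at the top, exit immediately if the lock is held, and release in its final node; the timeout is the safety valve for runs that die mid-queue.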

This client isn’t going to understand something that complicated. The “governance” has to be entirely within N8N.

I ended up just running the job that processes the next queue item on a schedule that’s -right- on the edge of being too fast. Then I made another job that looks at the output of an “uptime” command, deactivates the workflow when the load goes above 1.2, and reactivates it when the load drops below 0.5. Not ideal, but (hopefully) good enough.
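The load-watcher part of that boils down to parsing the 1-minute load average out of `uptime` and applying hysteresis with the two thresholds. A sketch of just that decision logic (the actual activate/deactivate step would go through n8n's workflow API or CLI and is not shown; function names here are made up):

```python
def parse_load(uptime_output: str) -> float:
    """Pull the 1-minute load average out of `uptime` output,
    e.g. '12:00 up 3 days, load average: 0.42, 0.31, 0.25'."""
    return float(uptime_output.split("load average:")[1].split(",")[0])

def should_be_active(load_1min: float, currently_active: bool,
                     high: float = 1.2, low: float = 0.5) -> bool:
    """Hysteresis with the thresholds from the post: deactivate above
    `high`, reactivate below `low`, otherwise keep the current state."""
    if load_1min > high:
        return False
    if load_1min < low:
        return True
    return currently_active  # in the dead band, do nothing
```

The dead band between 0.5 and 1.2 is what keeps the watcher from flapping the workflow on and off every time the load crosses a single threshold.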

Lee