Repeated errors followed by multiple jobs in unknown status - all failed

Latest version running on DO droplet with docker compose.

I think the following screenshot describes it all

These unknown-status jobs were preceded by a ton of errors and failed jobs.

All these jobs are simple - wait for a webhook, manipulate some dates or add decimals to amounts, and write to a MySQL DB on the host running the dockerized n8n.
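The kind of transform described above could be sketched as an n8n Function-node style snippet (field names and values here are assumptions for illustration, not taken from the actual workflows):

```javascript
// Sketch: normalize a date and force two decimal places on an amount
// before the MySQL write. Field names (renewedAt, amount) are assumed.
const items = [{ json: { renewedAt: '2021-01-31T09:15:00Z', amount: '49' } }];

const out = items.map((item) => ({
  json: {
    // keep only the ISO date part
    renewedAt: new Date(item.json.renewedAt).toISOString().slice(0, 10),
    // render the amount with exactly two decimals
    amount: Number(item.json.amount).toFixed(2),
  },
}));

console.log(out[0].json); // { renewedAt: '2021-01-31', amount: '49.00' }
```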

The errors are usually along the lines of a read timeout for MySQL.
There are enough retries and graceful failures built into the workflows.
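On the MySQL side, the timeouts behind that error can be inspected and raised; a sketch (the values shown are illustrative examples, not recommendations):

```sql
-- Inspect the server-side settings that typically surface as read timeouts
SHOW VARIABLES LIKE 'net_read_timeout';
SHOW VARIABLES LIKE 'wait_timeout';
SHOW VARIABLES LIKE 'max_connections';

-- Raise them for bursty workloads (example values, effective until restart)
SET GLOBAL net_read_timeout = 120;
SET GLOBAL wait_timeout = 600;
```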

This happened at month end, when a bunch of subscriptions renewed at the same time and generated maybe 10 concurrent webhooks and workflows.

I am surprised n8n can't handle 10 concurrent connections and pretty much died until I rebooted the box.

What can I do to shore up resiliency?
Are there any config parameters, or can I assign more CPU cores or RAM?

Thanks a lot.

Sorry to hear that you are having problems. To be able to answer your question properly, and not waste unnecessary time, we created a default template that gets filled in automatically every time an issue is created. Sadly, you deleted it instead of answering the questions, which kind of defeats the purpose. So could you please answer the following questions so that we can help you as well and as fast as possible? Thanks!

  • n8n version:
  • Database you’re using (default: SQLite):
  • Running n8n with the execution process [own(default), main]:
  • Running n8n via [Docker, npm, desktop app]:


  1. Version - latest
  2. SQLite default db
  3. I don’t know what this means
  4. Running docker compose on Ubuntu on Digital Ocean

Thanks for providing the data.

In this case I expect it is the way the executions run: you probably have it set to the default value own (if not set, it will also use that).
Can you please check whether your docker-compose file has an environment entry for EXECUTIONS_PROCESS. If not, or if it is not set to main, add one and set it to main.

The relevant part of the compose file should then look like this:
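A sketch of that environment entry in docker-compose (the service name and surrounding keys are assumptions; only the EXECUTIONS_PROCESS line comes from the thread):

```yaml
services:
  n8n:
    image: n8nio/n8n
    environment:
      - EXECUTIONS_PROCESS=main
```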
This will make sure that n8n does not start a new process for each workflow execution. That will not just make the executions much faster, it will also reduce the required memory by a lot, so 10 and many more executions in parallel should then be no problem anymore.

You should see the result very quickly, for example by executing a workflow manually. Instead of having to wait at least 1 second for the workflow to start (and potentially finish right away), it will start almost instantly.

Thanks Jan,
I am away from the computer now but I used the default docker file so it should be the default value.

So if I change it to main will that spawn a new process for each workflow or is it the other way around?
I read your reply twice but I am not clear which setting does what.

Thanks for the near instant replies.

Thanks @jan,
I checked the docker file and there was no line for EXECUTIONS_PROCESS.
I added one as you suggested.
However after reading the docs at
Execution modes and processes - n8n Documentation

Shouldn’t I rather use own so I can have multiple CPUs going to work instead of overloading one CPU?

It depends what the limiting factor is. In probably more than 98% of cases it is the memory, not the CPU. Also, the worst thing that happens when the CPU becomes the limit is that things get a little slower, compared to crashing when the memory runs out. On top of that, spawning a new process does not just take a lot of memory, it also requires quite some time (around 1 second). So "main" will be the best setting for you. Just give it a try; n8n will be able to process at least 10-20x more in parallel.
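Since memory is the usual bottleneck, the container's RAM can also be raised explicitly in docker-compose; a sketch (service name, memory limit, and heap size are assumptions, not recommendations from this thread):

```yaml
services:
  n8n:
    image: n8nio/n8n
    mem_limit: 2g    # container memory cap (compose v2 file format)
    environment:
      - EXECUTIONS_PROCESS=main
      - NODE_OPTIONS=--max-old-space-size=1536    # Node heap within that cap
```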

We will release some benchmarks soon that will make the performance increase clear.


Thanks Jan.
I did a couple of things.
Changed the setting to main and also increased the RAM - because that seemed to be the limiting factor.
Will see when the next burst comes in and update this thread.
Thanks for your help


This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.