Workflow stopping without errors

Describe the issue/error/question

My n8n workflow is not reliable. From time to time it stops and returns an “unknown” tag in the executions popup. A webhook loads data which is filtered and split into batches; however, sometimes the flow stops at the “batches” node, e.g. sending only 4 of the 5 batches through.

To make the flow efficient, we opted to use a sub-workflow where the data is sent to an external platform.

What is the error message (if any)?

Sadly, none.

Please share the workflow

Share the output returned by the last node

There should not be any output; the flow can stop after all the batches have been sent through to the sub-workflow.

Information on your n8n setup

  • n8n version: Running version [email protected]
  • Database you’re using (default: SQLite): None, we load in Excel files.
  • Running n8n with the execution process [own(default), main]:
  • Running n8n via [Docker, npm, desktop app]: Cloud

Example of the error:

Hi @Maarten_Bruyninx, I am sorry to hear you’re having trouble. An “unknown” execution status usually suggests that your n8n instance crashed during the workflow execution.

I couldn’t find an n8n cloud instance with your forum email unfortunately, so couldn’t take a look at the actual logs for your instance. However, the most common cause for such crashes would be an out of memory situation.

These are especially hard to debug on n8n cloud as there are no user-facing server logs or an indicator for the memory consumption. Could you perhaps try moving the Function node inside your sub-workflow called by the Execute Workflow node and make sure your sub-workflow only returns an empty (or very small) item?

This should reduce the memory consumption quite a bit, perhaps enough to avoid the situation.
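To illustrate the suggestion above, here is a minimal sketch in plain JavaScript (not actual n8n node code; the function and item names are made up for illustration): the sub-workflow does the heavy lifting internally and hands back one tiny item, so the parent workflow never holds the full result set.

```javascript
// Hypothetical stand-in for a sub-workflow ending in a Function/Set node.
function runSubWorkflow(batch) {
  // ...imagine the heavy Function-node processing happening here,
  //    entirely inside the sub-workflow's own execution...

  // Return a single, nearly empty item instead of all processed data,
  // so the calling (parent) workflow only keeps this in memory:
  return [{ json: { status: "success", processed: batch.length } }];
}

// Example: a 20-item batch goes in, a single small item comes out.
const batch = Array.from({ length: 20 }, (_, i) => ({ json: { id: i } }));
const result = runSubWorkflow(batch);
console.log(result.length);            // 1
console.log(result[0].json.processed); // 20
```

The key point is that only the one-item summary crosses back into the parent workflow.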

The cloud instance email is [redacted].
The last node in the sub-workflow is a “Set” node that just returns “success” (5 times, hence the 20 items → 4 runs).

Hi @Maarten_Bruyninx, thanks for confirming! This does indeed seem to be an out of memory situation:

Based on your response there might not be many more options we can try apart from moving the Function node inside your sub-workflow (so that it wouldn’t keep >5K items in memory, but only 20 for the duration of your sub-workflow execution), I am afraid.

If there is no option to reduce the input file size you might want to test a larger n8n cloud plan (larger instances would have more memory available) or consider self-hosting n8n (so you can assign as much memory as needed).

@MutedJam, could you elaborate further on the heap limit? Do the heap & stack get cleared after each execution? And I assume this heap is shared, as in: if I run 4 workflows on my cloud, it’s that one single heap that has its limit.

So n8n would keep all data in memory during a workflow execution.

Meaning if the Function node sits in your “parent” workflow, it would keep all ~6K items in memory for the entire duration of the parent workflow execution. If the Function node sits in the “child” workflow called from the parent, the memory would become available again after the child workflow execution has finished.
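As a rough analogy (plain JavaScript, not n8n internals; all names here are invented): data created inside a function becomes eligible for garbage collection once the function returns, much like a child workflow execution finishing.

```javascript
// Stand-in for a "child" workflow: expands a small batch into large
// objects, then lets only a tiny summary escape its scope.
function processBatch(batch) {
  const expanded = batch.map(item => ({ ...item, payload: "x".repeat(1000) }));
  // Only the summary survives; `expanded` can be collected after return.
  return { sent: expanded.length };
}

// The "parent" loops over ~6K items in batches of 20, never holding
// more than one expanded batch at a time.
const allItems = Array.from({ length: 6000 }, (_, i) => ({ id: i }));
const summaries = [];
for (let i = 0; i < allItems.length; i += 20) {
  summaries.push(processBatch(allItems.slice(i, i + 20)));
  // The previous batch's expanded data is now garbage-collectable.
}
console.log(summaries.length); // 300 batch summaries
```

If the expansion happened in the parent loop instead, all 6,000 expanded items would stay in memory until the loop finished.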

So I should do webhook → split in batches → send to new sub flow (where I would make objects, filter and send to workflow 6)?

Yes, I’d say it’s worth a shot. I usually suggest a longer list of options (like this), but tbf your workflow looks pretty clean already.

Thanks for the fast responses, I will work on a fix after my lunch break.
Have a nice day!


Thanks, you too!

And I am really sorry for the trouble, I know these errors without error messages are incredibly frustrating :frowning_face:


@MutedJam could you give me some more detail about the current plan we are using?

  • how much ram did the workflow use?
  • how much ram is available in the upgraded packages

We are currently looking at upgrading the plan, but we can not find how much ram is available in each of them.


Hi @Maarten_Bruyninx, I have asked about this internally in the past but unfortunately we don’t provide the exact amount of RAM for each instance size :frowning:. Personally I hope this will be handled in a more dynamic fashion at some point (to allow short spikes), but we’re not quite there yet.

That said, even with a known amount of RAM you might not know how much RAM exactly your workflow execution requires. So my suggestion here would be to just test the larger plans:

  1. sign up for a free trial of a larger plan
  2. copy your workflows
  3. verify whether they run
  4. cancel the trial again

@MutedJam Hi Tom, sorry for writing here again, but it seems a bit crazy to open a new topic.
We have moved our cloud to [email protected] With the upgraded plan everything is running smoothly.

The new issue is the authentication, well I say new but it has always been a bit iffy.
As you know we use a webhook to start the workflow, and sometimes the access token gets refreshed and sometimes it does not.

Our fix right now is to manually click the green “reconnect” button in the credentials tab, but this is not a good “production level” solution. Any ideas?

Thanks in advance.

Hi @Maarten_Bruyninx, glad to hear the larger instance works for you.

Your new question sounds like a very different issue though. Which service are you authenticating with and which n8n credential type are you using? How often do you need to re-connect?

We use Zoho, but not the CRM that n8n has. So we just use plain old OAuth 2.0

Are you explicitly setting the access_type=offline query parameter in your credential settings? Last time I tried out a Zoho API, this appears to have been the key, here are the exact settings I used back then: Zoho Inventory Oauth2 404 Error - #6 by MutedJam
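To illustrate where that parameter ends up, here is a hedged sketch of building an OAuth2 authorization URL with `access_type=offline` (the endpoint, client ID, redirect URI, and scope below are placeholders, not verified Zoho values): without this parameter, some providers only issue short-lived access tokens and never a refresh token, which would explain the repeated reconnects.

```javascript
// Assumed/placeholder values for illustration only.
const params = new URLSearchParams({
  response_type: "code",
  client_id: "YOUR_CLIENT_ID",                  // placeholder
  redirect_uri: "https://example.com/callback", // placeholder
  scope: "ZohoCRM.modules.ALL",                 // example scope
  access_type: "offline", // request a refresh token for unattended renewal
  prompt: "consent",      // some providers only re-issue refresh tokens with this
});

const authUrl = `https://accounts.zoho.com/oauth/v2/auth?${params.toString()}`;
console.log(authUrl.includes("access_type=offline")); // true
```

In n8n's generic OAuth2 credential, the equivalent is adding `access_type=offline` to the Auth URI Query Parameters field.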


Hm, I am afraid I don’t know what’s required to get a renewable access token for the API from your screenshot. Do you have maybe a link to the documentation?


Thanks @Maarten_Bruyninx. I’ll test a bit with this particular API on my end and see what I can find. Just to make sure, did you select “Server-based” when registering your app with Zoho?

Going forward, could you make sure you open separate threads for different issues? Otherwise such questions might get easily overlooked.

Thanks so much!