Need help here: today my n8n cloud workspace suddenly started restarting every 10 mins. I don't have anything scheduled to run every 10 mins, so I'm not sure which workflow is causing the issue.
My cloud instance: diane
A restart exactly every 10 minutes almost always means something is actively triggering it, not a random crash. Check Executions → All (include failed/errored) and look at the timestamps — find what ran right before each restart, that’s your culprit. Also search your workflows for any Schedule Trigger set to */10 * * * * or “every 10 minutes”. If you can’t spot the cause, n8n cloud support can check the instance-level logs for the diane workspace directly.
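If you want to double-check for a 10-minute schedule across many workflows, a quick script over an exported workflow JSON (Workflow menu → Download) can help. This is a hedged sketch: the node type and parameter names (`n8n-nodes-base.scheduleTrigger`, `rule.interval`, `minutesInterval`, `cronExpression`) follow what recent n8n exports look like, but they may differ by version, so adapt as needed.

```javascript
// Scan an exported n8n workflow object for Schedule Trigger nodes
// configured to fire every 10 minutes (either via the "minutes"
// interval or a */10 cron expression). Field names are assumptions
// based on recent n8n export formats.
function findTenMinuteTriggers(workflow) {
  return (workflow.nodes || [])
    .filter((node) => node.type === 'n8n-nodes-base.scheduleTrigger')
    .filter((node) => {
      const rules = (node.parameters &&
        node.parameters.rule &&
        node.parameters.rule.interval) || [];
      return rules.some(
        (r) =>
          (r.field === 'cronExpression' &&
            /^\*\/10\s/.test(r.expression || '')) ||
          (r.field === 'minutes' && r.minutesInterval === 10)
      );
    })
    .map((node) => node.name);
}

// Minimal example with a hypothetical export:
const workflow = {
  nodes: [
    {
      name: 'Every 10 min',
      type: 'n8n-nodes-base.scheduleTrigger',
      parameters: {
        rule: { interval: [{ field: 'minutes', minutesInterval: 10 }] },
      },
    },
    { name: 'HTTP Request', type: 'n8n-nodes-base.httpRequest', parameters: {} },
  ],
};
console.log(findTenMinuteTriggers(workflow)); // prints: [ 'Every 10 min' ]
```

Run it with Node.js against each downloaded workflow file; anything it prints is a candidate for the 10-minute restarts.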
Hi,
No, I don’t have anything scheduled to run every 10 mins.
This is the latest error, which happened at 4:21pm GMT+8.
This is the workflow execution log for the hour prior to the error:
| Workflow | Status | Started | Run time | Exec. ID |
|---|---|---|---|---|
| Diane-AI-Chatbot-Qinet | Success | Mar 16, 16:15:02 | 1.611s | 14387 |
| Diane-AI-Chatbot-Qinet | Success | Mar 16, 16:14:45 | 16.683s | 14386 |
| Diane-AI-Chatbot-Qinet | Success | Mar 16, 16:10:55 | 956ms | 14385 |
| Diane-AI-Chatbot-Qinet | Success | Mar 16, 16:10:50 | 5.211s | 14384 |
| Scheduled Newsletter Generation and Email Delivery | Success | Mar 16, 16:00:31 | 1.882s | 14383 |
Thanks
Hey @andi.christian, looking at your logs, the culprit is Diane-AI-Chatbot-Qinet. It's running very frequently, and one of those executions is likely consuming enough memory to crash the instance.
Two things to check:
1. Open that workflow and look at what's triggering it. Is it a webhook or a chat trigger? Something is hitting it repeatedly in short bursts.
2. Check the execution that happened right around 4:21pm and click into it; look for any node that's processing a large amount of data or stuck in a loop.
If you can share what trigger that workflow uses and roughly what it does, we can pinpoint the exact cause.
A workspace restart loop usually means a worker process is crashing. Check the n8n logs first — look for any OOM errors or unhandled exceptions in the last 10 mins. If self-hosted, also check whether a specific workflow is always running when it crashes. Memory leak or infinite loop? It's also worth checking whether you have any very large payloads being processed.
It's triggered by a webhook, and there was no execution at 4:21pm. The last execution on my n8n cloud was at 4:15pm, before the workspace went offline around 4:20-4:21pm.
It's on n8n cloud. Where can I check the n8n logs? I'm also not sure which workflow is causing the issue, as sometimes it just crashes without anything showing in the workflow execution log.
Go to your Cloud Admin Dashboard and manually restart the diane instance; this can restore access temporarily. But first, I suggest emailing [email protected] with your instance name and the exact crash times — they can check the logs directly and identify what's causing the restarts.
Hi @andi.christian, welcome!
It could be a memory issue. Make sure that workflows containing AI chatbots or similar are unpublished, then restart your instance so there's no heavy memory load. Also consider upgrading your plan, as this usually happens due to insufficient memory.
Rather than the Help channel, try reaching out to the support team at [email protected] so they can check whether there's a specific problem. That said, the issue is likely a resource limitation, so consider upgrading, and if it persists, contact the support team.
I noticed something: earlier today (before I realised this issue), the workflows kept sending me n8n workflow-error emails (I've enabled the option to run a workflow whenever there's an error in a workflow). I thought something was wrong with my error workflow, so I turned it off.
But could it be related to this issue? The emails arrived about 12 times (some with different models) at the same time, with the same reason but for different workflows and nodes. (I didn't see these workflows run at the time I got the error message, after checking the execution log.)
Error on Diane-AI-Chatbot-UD
Reason: Workflow did not finish, possible out-of-memory issue
Last node: Edit Fields
That error message is the missing piece — “Workflow did not finish, possible out-of-memory issue” across 12 workflows at once explains exactly what happened.
What likely occurred: multiple AI chatbot workflows ran concurrently, each consuming a chunk of memory. When they all hit their limits around the same time, the instance ran out of RAM and crashed. The Edit Fields node being the last one suggests those workflows were building up large data objects (conversation history, AI responses) before trying to map/transform them.
Immediate steps:
1. Manually restart the diane instance to get back online.
2. Email [email protected] with your instance name diane, the crash time (~4:21pm today), and the error message you received — they can pull the actual memory metrics and confirm which workflow pushed it over the edge.

Longer-term fix: In those chatbot workflows, check how much data you're passing into Edit Fields. If you're appending full conversation history to an array and feeding it back to the AI node, that array grows with every message and can get very large quickly. Truncating it (keep only the last N messages) usually solves this.
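The truncation idea can be sketched as a small helper, e.g. inside an n8n Code node before the AI node. This is illustrative only: the `{ role, content }` message shape and the `MAX_MESSAGES` limit are assumptions, not anything from this thread — adapt them to your actual data.

```javascript
// Sketch: cap a growing conversation history before it reaches the
// AI node. Assumes messages shaped like { role, content }; the cap
// of 20 is an arbitrary example value.
const MAX_MESSAGES = 20;

function truncateHistory(history, maxMessages = MAX_MESSAGES) {
  if (history.length <= maxMessages) return history;
  // Preserve an initial system prompt (if any), then keep only the
  // most recent messages up to the cap.
  if (history[0].role === 'system') {
    return [history[0], ...history.slice(-(maxMessages - 1))];
  }
  return history.slice(-maxMessages);
}
```

With a 50-message history that starts with a system prompt, this keeps the prompt plus the 19 most recent messages, so the payload sent to the AI node stays bounded no matter how long the chat runs.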
The thing is, those error emails are not about the latest error; all were past errors. I checked each log to make sure, and sure enough the records show that the message sent to the chatbot is not from the latest error but from earlier.
I just reactivated the error workflow (email sending) again, and got a barrage of 12 emails saying there's an error (timestamp: 7:08pm).
However, on one of the errors, when I checked the id:, the message was sent by the user at 5:20pm, and it shows in the execution log as an error (I believe this happened when the workflow crashed while n8n was still running).
I even unpublished one of the other workflows mentioned in the emails, but after waiting around 10 minutes, I'm still getting the error for that particular workflow.
So these 12 emails keep repeating the same issue, and n8n is sending the error to my Error workflow every 10 mins; it's always the same 12 execution IDs and the same error.
Yeah, that's the cloud retry mechanism — when executions crash with OOM, n8n cloud queues them for automatic retry every ~10 min. Unpublishing the workflow stops new runs but not retries that are already in the queue; those 12 execution IDs are stuck in a retry loop on the server side.
To stop it: email [email protected] with instance name diane and the 12 execution IDs — they can flush the retry queue from their end. There's no UI option to cancel queued retries. Turning off the error workflow in the meantime is fine to avoid the spam while you wait for their response.
Thanks. That explains the email. Will do as per your suggestion
Good luck with support! If they confirm the root cause (which workflow/memory pattern triggered it), feel free to share back here — that kind of real-world cloud OOM case can be useful for others hitting similar symptoms.
This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.