Weird behavior in cloud n8n - can't check workflow executions!

I have an n8n cloud instance which I updated a few days ago to v0.204.
Today I created a relatively simple workflow and set a 1-hour trigger on it. After a couple of runs I wanted to check the workflow execution log, but I wasn't able to - every time I open it, after a few seconds I get:
“Connection lost”
then shortly
“Problem loading data”
then shortly
“Request failed with status code 502”
then, shortly after, if I reload the page
“503 Service Temporarily Unavailable
nginx”
I tried to revert to the last stable version, 0.202, but the behavior is the same - the workflow is running, but I cannot check its execution logs.

Hi @Yuriy_Klyuch, I am very sorry to hear you’re having trouble.

The description very much suggests your n8n instance ran out of memory when trying to open your execution data. Is there a chance your workflow processes a large amount of data? If so, fetching this data might push your instance's memory requirements over the available memory. In such cases, you might want to consider rewriting your workflows, for example as suggested here.
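To illustrate the general idea outside of n8n (the exact node setup depends on your workflow): processing data in small chunks keeps only one chunk in memory at a time, rather than the whole dataset. Here's a minimal Python sketch of that pattern - the function names and chunk size are made up for this example, not n8n's implementation:

```python
from typing import Iterable, Iterator, List


def chunked(records: Iterable[dict], size: int = 100) -> Iterator[List[dict]]:
    """Yield records in fixed-size chunks so only one chunk is held in memory."""
    batch: List[dict] = []
    for record in records:
        batch.append(record)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:  # flush the final, possibly smaller, chunk
        yield batch


def process(batch: List[dict]) -> None:
    # Placeholder for the per-chunk work (e.g. one API call per chunk).
    print(f"processing {len(batch)} records")


for batch in chunked(({"id": i} for i in range(1050)), size=100):
    process(batch)
```

Within n8n itself, the Split In Batches node serves the same purpose of iterating over data in smaller pieces.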

I know this isn’t a great experience so please accept my apologies for the trouble this causes.

If you'd like us to take a closer look at why your specific instance becomes unavailable, could you kindly reach out via email to [email protected]? It'd be great if you could use the email address you have registered with on n8n cloud when doing so, to help us identify you.

Thank you so much!

Thanks for the suggestion, it looks interesting. There wasn't a "large amount of data" directly in my workflow, but I think I found the reason - I had a disabled and disconnected node that was used in initial dev/tests, and it had a large amount of pinned data. I unpinned it, and after re-saving and restarting it looks like I can see workflow executions fine now. It seems n8n could be improved here (i.e. pinned data on disabled/disconnected nodes shouldn't cause this kind of problem).
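In case it helps anyone else hitting this, here's a rough Python sketch for stripping pinned data from an exported workflow JSON before re-importing it. It assumes pinned data sits under a top-level "pinData" key, which is what my export looked like - the schema may differ between n8n versions, so check your own export first:

```python
import json
import sys


def strip_pinned_data(path: str) -> None:
    """Remove pinned data from an exported n8n workflow JSON file.

    Assumes pinned data is stored under a top-level "pinData" key
    (true for my export; the schema may vary between versions).
    """
    with open(path, encoding="utf-8") as f:
        workflow = json.load(f)

    pinned = workflow.pop("pinData", None)
    if pinned:
        size = len(json.dumps(pinned))
        print(f"Removed pinned data for {len(pinned)} node(s), ~{size} bytes")
        with open(path, "w", encoding="utf-8") as f:
            json.dump(workflow, f, indent=2)
    else:
        print("No pinned data found")


if __name__ == "__main__":
    strip_pinned_data(sys.argv[1])
```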


Oh, that sounds like an important problem indeed, thank you so much for reporting this! I've blocked some time next week to test this further on my end.

I am very glad to hear it's sorted for you though, thanks for confirming! :)

So I tried reproducing this on my own n8n cloud instance today, but didn't manage to trigger the problem here.

I was mostly using this workflow to do so, pinning various datasets on both the Webhook and the HTTP Request node, then disconnecting and re-connecting the HTTP Request node several times.

However, I could not reproduce a crash when viewing past executions, no matter whether they were triggered manually with pinned data or through the Webhook node (both manual and production executions).

Could you perhaps confirm which steps exactly are required to reproduce this error?
