There are several threads on the n8n forum about memory problems, so you can't just say "my n8n works". That's like writing a paper whose experiments only reproduce on your own machine.
You could have 100k, 200k, or 300k executions, but the real question is: how much memory do your workflows use while they execute? If a workflow uses a lot of RAM, it will crash n8n, because n8n doesn't properly track or cap per-workflow memory usage. The only way I've found to work around this is to move the heavy logic out of n8n and call it through a plain HTTP endpoint, which is far from ideal. I also built multi-language code nodes (Rust, Go, C, Ruby) that execute the code outside of n8n, so each node can scale independently, and that works really well (right now n8n rebuilds all nodes dynamically on every run, which makes no sense). Unfortunately I can't publish it as open source: I built it while employed at a company, and they won't allow it.
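As a rough sketch of that offloading pattern, here is a minimal Go service exposing a hypothetical `/aggregate` endpoint (the endpoint name, port, and `heavyAggregate` placeholder are my assumptions, not anything from n8n itself). The RAM-heavy step runs in this separate process, and an n8n HTTP Request node just calls it:

```go
package main

import (
	"encoding/json"
	"net/http"
)

// heavyAggregate stands in for whatever RAM-hungry step the workflow needs.
// It runs in this process, so its memory usage never touches the n8n instance.
func heavyAggregate(values []float64) float64 {
	sum := 0.0
	for _, v := range values {
		sum += v
	}
	return sum
}

// aggregateHandler exposes the logic over HTTP so an n8n HTTP Request node
// can call it: POST {"values": [...]} -> {"result": ...}.
func aggregateHandler(w http.ResponseWriter, r *http.Request) {
	var payload struct {
		Values []float64 `json:"values"`
	}
	if err := json.NewDecoder(r.Body).Decode(&payload); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	json.NewEncoder(w).Encode(map[string]float64{"result": heavyAggregate(payload.Values)})
}

func main() {
	http.HandleFunc("/aggregate", aggregateHandler)
	http.ListenAndServe(":8080", nil) // hypothetical port; point the n8n HTTP Request node here
}
```

In the workflow you would replace the in-process Function/Code node with an HTTP Request node pointed at this service; if the service crashes or gets OOM-killed, it can restart on its own without taking the n8n instance down with it.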
But as you can see, there are many things that could be done here, and I wonder why this isn't a priority.