By default, n8n simply starts a workflow whenever it is asked to. That means at any given point you could have anywhere between 0 and n workflows running, and you have no real control over that number. Depending on how much memory your machine has, n8n would then crash at some point once it runs out of memory.
With worker processes, you gain some control: n8n will only run as many workflows in parallel as there are free slots on the workers. But if those executions require too much memory, the workers would crash as well.
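For illustration, the number of slots per worker can be set when starting the worker process. This is a sketch assuming queue mode is enabled and Redis is configured; the exact flag may differ between n8n versions, so check the docs for yours:

```shell
# Start an n8n worker that runs at most 5 executions in parallel.
# Requires queue mode (EXECUTIONS_MODE=queue) and a reachable Redis instance.
n8n worker --concurrency=5
```

With 5 such workers you would get at most 25 executions running in parallel across the whole setup.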
If you want a formula it would be something like:
[memory n8n main process] + [number of workflows running] x [memory required per workflow execution] < [RAM of worker]
The main process normally takes up around 100 MB.
How much memory a workflow execution requires depends mainly on the amount of data it processes.
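As a rough sketch, you can rearrange that formula to estimate how many executions fit on one worker. The per-execution number below is a made-up placeholder; you would need to measure what your own workflows actually consume:

```python
# Sizing sketch based on the formula above:
#   [main process] + [parallel executions] x [memory per execution] < [worker RAM]
MAIN_PROCESS_MB = 100   # typical n8n main process footprint (from the text above)

def max_parallel_executions(worker_ram_mb: int, per_execution_mb: int) -> int:
    """Estimate how many executions fit on a worker before it risks
    running out of memory. per_execution_mb is an assumed average;
    it depends mainly on how much data the workflows process."""
    return (worker_ram_mb - MAIN_PROCESS_MB) // per_execution_mb

# Hypothetical worker with 2 GB RAM and ~150 MB per execution:
print(max_parallel_executions(2048, 150))  # → 12
```

Note this is a best-case estimate: a single execution that processes an unusually large payload can blow well past the average and still crash the worker.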
Another consideration is how protected you want to be against crashes. If you have 5 workers with 10 executions each, then any one of those 10 workflows, or a combination of them, could crash a worker and take the other executions on it down too. If you run only one execution per worker, each execution can only take down itself, not any other execution.
You also have to be aware that all executions on a worker share one CPU. So if you have very CPU-intensive workflows, running 50 in parallel is probably not a good idea, as they would take forever to finish. Additionally, you would have to provide a lot of RAM to that worker.
So there is sadly no clear, simple answer; it really depends on your specific needs. But generally, the fewer executions you run on a worker in parallel, the lower the chance that the worker runs out of memory and crashes. At the same time, that obviously also increases the chance that workflow executions get delayed.
Hope that is helpful.