Based on the information provided, I don't think the problem is in the scaling process itself.
It looks more like the data volume is too large and n8n is struggling to deal with it.
Could you split this cron into two separate workflows, each working with 10k lines at a time, just so we can see whether n8n handles a smaller dataset without issues?
The main problem I can see is that, while processing the full dataset, n8n accumulates data in memory, so you may be running out of RAM, or n8n may be struggling to continue because of memory limits.
Do you have any memory monitoring in place to see how memory usage evolves while the workflow runs?
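If you don't have monitoring yet, here is a minimal sketch you can run from the shell, assuming n8n runs directly on the host (the `pgrep` match pattern is an assumption; adjust it to your setup, and under Docker `docker stats <container>` gives a similar view):

```shell
# mem_mb PID -> prints the process's resident set size (RSS) in megabytes
mem_mb() {
  rss_kb=$(ps -o rss= -p "$1") || return 1  # RSS in kilobytes (POSIX ps)
  echo $((rss_kb / 1024))
}

# Hypothetical watch loop: sample the n8n process every 5 seconds.
# `pgrep -o -f n8n` (oldest process matching "n8n") is an assumption.
# while :; do
#   printf '%s  n8n RSS: %s MB\n' "$(date '+%H:%M:%S')" "$(mem_mb "$(pgrep -o -f n8n)")"
#   sleep 5
# done
```

If the RSS climbs steadily toward the machine's (or container's) limit while the workflow runs, that would confirm the memory hypothesis.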