Workflow crashes every time after a while

Describe the problem/error/question

Hello, I have a workflow that has to run an N × N × N × N query loop (4 nested loops), which is not great to do in parallel, so I run it sequentially using the “Loop Over Items” node. The problem is that every time, the workflow processes for about 10 minutes and then crashes with this message:

This execution failed to be processed too many times and will no longer retry. To allow this execution to complete, please break down your workflow or scale up your workers or adjust your worker settings.

In the console I see these logs (running in queue mode with 3 workers):

Worker finished execution 479237 (job 1)
Worker started execution 479243 (job 4)
Worker finished execution 479243 (job 4)
Worker started execution 479276 (job 9)
Worker started execution 479283 (job 12)
[ERROR] Failed to parse string as JSON array for key <censured>

Worker finished execution 479283 (job 12)
Worker started execution 479521 (job 14)
Worker finished execution 479276 (job 9)
Worker finished execution 479521 (job 14)
Worker started execution 480095 (job 17)
Worker finished execution 480095 (job 17)
Worker started execution 479247 (job 5)
Worker started execution 480106 (job 21)
Worker finished execution 480106 (job 21)
Size of "nodeExecuteAfter" (8 MB) exceeds max size 5 MB. Trimming...
Size of "nodeExecuteAfter" (8 MB) exceeds max size 5 MB. Trimming...

What could it be? I can’t split the workflow, as you can see.

Each iteration is small.

Is this fixable in some way?
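In case it helps to illustrate the loop shape: a minimal sketch of collapsing the four nested loops into one flat item list up front (e.g. in a Code node), so a single Loop Over Items pass walks the combinations sequentially. The arrays and field names here are placeholders, not my actual queries:

```javascript
// Sketch: precompute the cross product of the four result sets so the
// workflow needs only ONE sequential loop instead of four nested ones.
// The arrays a, b, c, d are placeholders for the real query results.
function crossProduct(a, b, c, d) {
  const items = [];
  for (const w of a)
    for (const x of b)
      for (const y of c)
        for (const z of d)
          items.push({ json: { w, x, y, z } }); // n8n item shape
  return items;
}

// 2 × 2 × 2 × 2 inputs give 16 combinations
console.log(crossProduct([1, 2], [1, 2], [1, 2], [1, 2]).length); // 16
```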

What is the error message (if any)?

This execution failed to be processed too many times and will no longer retry. To allow this execution to complete, please break down your workflow or scale up your workers or adjust your worker settings.

Please share your workflow

(Select the nodes on your canvas and use the keyboard shortcuts CMD+C/CTRL+C and CMD+V/CTRL+V to copy and paste the workflow.)

Share the output returned by the last node

Information on your n8n setup

  • n8n version: 1.92.2 (Self Hosted)
  • Database (default: SQLite): postgres
  • n8n EXECUTIONS_PROCESS setting (default: own, main): own, main
  • Running n8n via (Docker, npm, n8n cloud, desktop app): docker
  • Operating system: aws os

Same problem

Size of "nodeExecuteBefore" (49 MB) exceeds max size 5 MB. Trimming...
Size of "nodeExecuteAfter" (50 MB) exceeds max size 5 MB. Trimming...

The only thing I’ve found is this Node.js option:

  • NODE_OPTIONS=--max-old-space-size=8192

But at a certain point, with too much data and too many iterations, it would still crash silently.

Is this fixable in some way?