Describe the problem/error/question
We have a workflow that processes a large set of records 1,000 at a time: it takes in 1,000 records, writes them to the db, and loops the workflow. Right now in testing it does this about 17 times (~16k records total). We originally had some issues with this workflow crashing the server when trying to do the entire 16k at once, but breaking it up into batches of 1,000 seemed to fix that. The breaking point seemed to be around 10k records, but we went down to 1k at a time to give ourselves some headroom.
We have found the workflow runs fine, but whenever it goes to save the execution it crashes the server by using up the entire 2 GB of memory. Note that the entire workflow has finished by that point. If we turn off saving executions it seems to run fine, but it takes an additional 30 seconds or so during which it seems to be processing something. We are running a self-hosted server and we have given it the maximum memory.
Turning off saving executions is an acceptable workaround for now, but we would like to be able to examine executions, and it also raises concerns about the overall stability of the server. Any suggestions for things to look at or configure to resolve this issue?
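For context, the batching approach described above can be sketched as a simple chunking step, e.g. in a Code node. This is an illustrative sketch only; the names (`records`, `BATCH_SIZE`) are assumptions, not taken from the actual workflow.

```javascript
// Split a large record set into batches of 1000 so each workflow
// iteration stays well under the server's memory limit.
const BATCH_SIZE = 1000;

function chunk(records, size) {
  const batches = [];
  for (let i = 0; i < records.length; i += size) {
    batches.push(records.slice(i, i + size));
  }
  return batches;
}

// Example: ~16.5k records -> 17 batches, the last one partial
const records = Array.from({ length: 16500 }, (_, i) => ({ id: i }));
const batches = chunk(records, BATCH_SIZE);
console.log(batches.length); // 17
```

Each batch is then written to the database before the workflow loops to the next one.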
What is the error message (if any)?
No error message; the server just crashes and restarts.
Information on your n8n setup
n8n version: 1.3
Database (default: SQLite): Postgres
n8n EXECUTIONS_PROCESS setting (default: own, main): v1 (recommended)
Running n8n via (Docker, npm, n8n cloud, desktop app): Docker in Kubernetes
Operating system: Linux
Hey @zenweasel, can you please try setting the DB_LOGGING_MAX_EXECUTION_TIME env variable to 0, or upgrading to n8n version 1.6.1, to see if that reduces the memory usage?
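A hedged sketch of how both suggestions could be applied to a Docker-based deployment; the container name is illustrative, and the exact variable name is taken from the reply above.

```shell
# Disable slow-query logging and run the newer image in one go.
# Adjust the tag/name to match your own deployment.
docker run -d --name n8n \
  -e DB_LOGGING_MAX_EXECUTION_TIME=0 \
  n8nio/n8n:1.6.1
```

In Kubernetes the same env variable would go into the container spec's `env` list instead.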
Thanks for the reply. I did both of these things and still have the same issue:
WorkflowOperationError: Workflow did not finish, possible out-of-memory issue
at recoverExecutionDataFromEventLogMessages (/usr/local/lib/node_modules/n8n/dist/eventbus/MessageEventBus/recoverEvents.js:105:37)
at processTicksAndRejections (node:internal/process/task_queues:95:5)
at MessageEventBus.initialize (/usr/local/lib/node_modules/n8n/dist/eventbus/MessageEventBus/MessageEventBus.js:113:21)
at Server.configure (/usr/local/lib/node_modules/n8n/dist/Server.js:882:13)
at Server.start (/usr/local/lib/node_modules/n8n/dist/AbstractServer.js:182:9)
at Server.start (/usr/local/lib/node_modules/n8n/dist/Server.js:247:9)
at Start.run (/usr/local/lib/node_modules/n8n/dist/commands/start.js:214:9)
at Start._run (/usr/local/lib/node_modules/n8n/node_modules/@oclif/command/lib/command.js:43:20)
at Config.runCommand (/usr/local/lib/node_modules/n8n/node_modules/@oclif/config/lib/config.js:173:24)
at Main.run (/usr/local/lib/node_modules/n8n/node_modules/@oclif/command/lib/main.js:28:9)
How many cores does the server have?
I have seen similar behaviour with servers with 1 CPU or less.
See Workflow on worker taking a lot more memory than on main - #12 by netroy
We tried the previous suggestions (setting DB_LOGGING_MAX_EXECUTION_TIME and upgrading to 1.6.1) and those did not seem to do anything.
Increasing the cores from 1 to 2 seemed to do the trick.
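Since this deployment runs in Kubernetes, raising the CPU allocation can be done by adjusting the Deployment's resource requests and limits. A minimal sketch, assuming the Deployment and namespace are both named `n8n` (those names are not from the thread):

```shell
# Give the n8n pod 2 CPUs and 2Gi of memory.
kubectl set resources deployment/n8n -n n8n \
  --requests=cpu=2,memory=2Gi \
  --limits=cpu=2,memory=2Gi
```

Kubernetes will roll the pod to pick up the new resource settings.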
This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.