Workflow never stops executing

Describe the problem/error/question

Hello all!

I’m very new to n8n, so this might be a simple question.

I’ve created a small workflow to push data into a Qdrant vector database (version 1.12.5).
Everything seems to run just fine, but the workflow never stops.
Eventually it cannot be stopped manually either. What could be the issue, and how should I debug this further?

Everything is self-hosted.

What is the error message (if any)?

No error messages are seen in the UI; the execution just appears to keep running after the whole workflow has finished.

Information on your n8n setup

  • n8n version: 1.72.1
  • Running n8n via: Docker
  • Database: SQLite
  • Operating system: Ubuntu Linux server 24.04

Hey

It seems to work better if I give the workflow a smaller amount of data to process, but it still takes a long time after all the steps have executed.
I can also see that the browser is consuming 100% CPU after everything has executed, which seems odd to me.

Hi @timppa,

Welcome to the community! :balloon:

Do you get any error messages in the logs?
When you work with smaller chunks of data, do you see the execution lagging only after all the nodes have executed, not during?

This sounds like a memory issue: it is likely bogging down the execution and preventing it from resolving, even if the UI appears to run through it.
I would suggest going through this documentation - Memory-related errors | n8n Docs - adjusting your instance and workflow accordingly, and seeing if things change afterwards.
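
If it does turn out to be heap pressure, one common tweak is raising the Node.js heap limit via the NODE_OPTIONS environment variable when starting the container. A minimal sketch, assuming the official image and default port (the container name, memory limit and volume path are placeholders - adjust them to your setup):

docker run -d --name n8n -p 5678:5678 \
  -e NODE_OPTIONS="--max-old-space-size=8192" \
  -v ~/.n8n:/home/node/.n8n \
  docker.n8n.io/n8nio/n8n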

Let us know how you get on! :raised_hands:

Hi

Thanks for getting back to me on this issue.

So far, no errors in the logs:

~/.n8n $ cat n8nEventLog.log |grep -i Allocation
~/.n8n $ cat n8nEventLog.log |grep -i memo
~/.n8n $ cat n8nEventLog.log |grep -i heap

Am I looking in the right place?

The crash.journal is empty as well.
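
I can also try grepping the container output directly, something along these lines (assuming the container is simply named n8n):

docker logs n8n 2>&1 | grep -i -E 'heap|memory|allocation'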

I currently have 16 GB of RAM allocated to the VM running Docker and n8n, and it’s using ~50% at peak. I can increase that and try different JavaScript engine memory settings, although I did not see any errors.

One more thing: would you suggest using Postgres or MariaDB instead of SQLite? I’m not sure if that makes any difference. I have both running in my cluster, so I could move that part out of the container.
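
From the docs, I believe switching to Postgres would just be a matter of environment variables, something like this (the host and credentials here are placeholders for my cluster):

DB_TYPE=postgresdb
DB_POSTGRESDB_HOST=postgres.internal
DB_POSTGRESDB_PORT=5432
DB_POSTGRESDB_DATABASE=n8n
DB_POSTGRESDB_USER=n8n
DB_POSTGRESDB_PASSWORD=<secret>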


Br,
Timo

I just modified the configs a bit and added some more RAM to the machine. I also enabled debug logging to see a bit more. I will update once I have done some further testing.
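
For reference, the logging changes were roughly these (file output as well, so I can grep it afterwards):

N8N_LOG_LEVEL=debug
N8N_LOG_OUTPUT=console,file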

Thanks again for the help!


Br,
Timo

Here are the last log entries from the workflow execution:
2025-01-04T00:00:54.819Z | debug | Execution added {"executionId":"742","file":"active-executions.js","function":"add"}
2025-01-04T00:00:54.821Z | debug | Execution for workflow nextcloud data download was assigned id 742 {"executionId":"742","file":"workflow-runner.js","function":"runMainProcess"}
2025-01-04T00:00:54.833Z | debug | Execution ID 742 will run executing all nodes. {"executionId":"742","file":"workflow-runner.js","function":"runManually"}
2025-01-04T00:00:54.834Z | debug | Workflow execution started {"workflowId":"jbH3Q5M5csTNFeNP","file":"LoggerProxy.js","function":"exports.debug"}
...
2025-01-04T00:03:59.957Z | debug | Workflow execution finished successfully {"workflowId":"jbH3Q5M5csTNFeNP","file":"LoggerProxy.js","function":"exports.debug"}
2025-01-04T00:03:59.957Z | debug | Executing hook (hookFunctionsSave) {"executionId":"742","workflowId":"jbH3Q5M5csTNFeNP","file":"workflow-execute-additional-data.js","function":"workflowExecuteAfter"}
2025-01-04T00:03:59.957Z | debug | Save execution data to database for execution ID 742 {"executionId":"742","workflowId":"jbH3Q5M5csTNFeNP","finished":true,"stoppedAt":"2025-01-04T00:03:59.956Z","file":"shared-hook-functions.js","function":"updateExistingExecution"}
2025-01-04T00:04:05.931Z | debug | Executing hook (hookFunctionsPush) {"executionId":"742","pushRef":"fiuousk2sk","workflowId":"jbH3Q5M5csTNFeNP","file":"workflow-execute-additional-data.js","function":"workflowExecuteAfter"}
2025-01-04T00:04:08.911Z | debug | Send data of type "executionFinished" to editor-UI {"dataType":"executionFinished","pushRefs":"fiuousk2sk","file":"abstract.push.js","function":"sendTo"}
2025-01-04T00:04:10.773Z | debug | Execution finalized {"executionId":"742","file":"active-executions.js","function":"finalizeExecution"}
2025-01-04T00:04:10.773Z | debug | Execution removed {"executionId":"742","file":"active-executions.js"}
2025-01-04T00:04:10.790Z [Rudder] debug: no existing flush timer, creating new one
2025-01-04T00:04:16.499Z | debug | Querying database for waiting executions {"scopes":["waiting-executions"],"file":"wait-tracker.js","function":"getWaitingExecutions"}
2025-01-04T00:04:20.791Z [Rudder] debug: in flush
2025-01-04T00:04:20.791Z [Rudder] debug: cancelling existing flushTimer...
2025-01-04T00:05:16.500Z | debug | Querying database for waiting executions {"scopes":["waiting-executions"],"file":"wait-tracker.js","function":"getWaitingExecutions"}
2025-01-04T00:06:16.501Z | debug | Querying database for waiting executions {"scopes":["waiting-executions"],"file":"wait-tracker.js","function":"getWaitingExecutions"}
2025-01-04T00:07:16.503Z | debug | Querying database for waiting executions {"scopes":["waiting-executions"],"file":"wait-tracker.js","function":"getWaitingExecutions"}
2025-01-04T00:08:16.505Z | debug | Querying database for waiting executions {"scopes":["waiting-executions"],"file":"wait-tracker.js","function":"getWaitingExecutions"}
2025-01-04T00:09:16.507Z | debug | Querying database for waiting executions {"scopes":["waiting-executions"],"file":"wait-tracker.js","function":"getWaitingExecutions"}
2025-01-04T00:10:16.508Z | debug | Querying database for waiting executions {"scopes":["waiting-executions"],"file":"wait-tracker.js","function":"getWaitingExecutions"}
2025-01-04T00:11:16.509Z | debug | Querying database for waiting executions {"scopes":["waiting-executions"],"file":"wait-tracker.js","function":"getWaitingExecutions"}
2025-01-04T00:11:54.339Z | debug | Soft-deleted executions {"scopes":["pruning"],"count":29,"file":"pruning.service.js","function":"softDelete"}
2025-01-04T00:11:59.294Z | debug | Hard-deleted executions {"scopes":["pruning"],"executionIds":["690","691","692","693","694","695","696","697","698","699","700","701","702","703","704","705","706","707"],"file":"pruning.service.js","function":"hardDelete"}
2025-01-04T00:11:59.294Z | debug | Hard-deletion in next 15 minutes {"scopes":["pruning"],"file":"pruning.service.js","function":"scheduleNextHardDeletion"}
2025-01-04T00:12:16.511Z | debug | Querying database for waiting executions {"scopes":["waiting-executions"],"file":"wait-tracker.js","function":"getWaitingExecutions"}
2025-01-04T00:13:16.512Z | debug | Querying database for waiting executions {"scopes":["waiting-executions"],"file":"wait-tracker.js","function":"getWaitingExecutions"}
2025-01-04T00:14:16.514Z | debug | Querying database for waiting executions {"scopes":["waiting-executions"],"file":"wait-tracker.js","function":"getWaitingExecutions"}
2025-01-04T00:15:16.515Z | debug | Querying database for waiting executions {"scopes":["waiting-executions"],"file":"wait-tracker.js","function":"getWaitingExecutions"}

Execution still seems to be running…

Very interesting. I think this is more of a UI issue than the workflow actually never stopping.

I made some minor changes to the workflow: once the data has been pushed to Qdrant, it moves the original files in my Nextcloud instance to a "processed" folder. Everything works and the workflow completes, but the UI shows it as still executing, even though the logs show the workflow already finished.
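
If the backend has really finished, my guess (unverified) is that the executionFinished push message is not reaching the browser. I might try switching the UI push mechanism from websockets to server-sent events next, something like:

N8N_PUSH_BACKEND=sse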

Any ideas?
