Description of the issue: Hi everyone,
I am experiencing critical performance issues on my n8n Cloud (Starter Plan) instance. The instance is stuck on "Connection Lost", and standard troubleshooting hasn't resolved it.
The Workflow & Suspected Cause: I have an AI Agent workflow that queries a database and generates Excel reports.
- Normal Load: ~10 columns x 300 rows (works fine).
- The Event: I recently attempted to generate a heavier report (40 columns x 10,000 rows).
- The Result: I suspect this caused a massive memory spike or a process that refuses to die. Since that execution, the instance has been unstable.
What I have tried so far (without success):
- Restarted the Workspace: I used the dashboard to restart, but the sluggishness persists.
- Updated Version: I upgraded to the latest stable version (1.121.3), hoping it would clear the cache/processes.
- Wait Time: I stopped all interactions for 24 hours to let any queues drain, but the issue remains.
- Support: I have reached out to official support but haven't heard back yet.
- No process running: In Executions, nothing shows as Running or Queued.
My Configuration (Screenshots attached):
- Hosting: n8n Cloud (Starter Monthly)
- Version: 1.121.3
- Execution Data: I had "Save Successful Executions" turned ON. I realize now this might have contributed to bloating the database during the large file generation, so it's now turned OFF.
Question: Since I am on n8n Cloud, I don’t have CLI access to kill specific Node processes or flush the Redis/Postgres buffer manually. Is there a way to perform a “Hard Reset” of the instance memory or clear the execution backlog without waiting for a ticket response? Has anyone else experienced a “zombie” process surviving a workspace restart on Cloud?
Thanks in advance!