Hi everyone,
I’m currently running into a critical stability issue with our n8n Cloud instance and wanted to see if anyone has encountered this or has a workaround.
Describe the problem/error/question
When I attempt to fetch single execution details via the n8n API, the entire cloud instance crashes and restarts if the execution data is too large. This is an example request:
curl --request GET --url '<base_url>/api/v1/executions/32767?includeData=true'
I recently encountered this when trying to fetch details for an execution whose data was approximately 109 MB. As soon as the request hits the instance, the service goes down, which also kills any running workflows, including the one that issued the API request in the first place.
This makes building automated reporting or cleanup workflows around large executions extremely risky. Ideally, the n8n API would handle such requests gracefully, for instance by returning a 413 Payload Too Large (or at worst a 500) rather than triggering a full process restart that takes the instance down for every other user and workflow.
Have you already experienced this problem, and do you have any workarounds for it?
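In the meantime, this is the defensive client-side pattern I'm considering, sketched in Python. To be clear, it cannot prevent the instance-side crash (the server dies while serving the response), but it caps how much data the calling side will buffer and fails fast on a restart instead of hanging. The base URL, API key, and byte cap are placeholders; the `X-N8N-API-KEY` header and `includeData` query parameter are from the public API.

```python
import urllib.request

def fetch_execution(base_url: str, api_key: str, execution_id: int,
                    include_data: bool = False,
                    max_bytes: int = 20 * 1024 * 1024):
    """Fetch one execution, streaming the body and aborting past max_bytes.

    Returns the raw response bytes, or None if the body exceeded the cap.
    """
    url = (f"{base_url}/api/v1/executions/{execution_id}"
           f"?includeData={'true' if include_data else 'false'}")
    req = urllib.request.Request(url, headers={"X-N8N-API-KEY": api_key})
    # Short timeout so a workflow calling this fails fast if the
    # instance goes down mid-request instead of blocking.
    with urllib.request.urlopen(req, timeout=60) as resp:
        chunks, total = [], 0
        while True:
            chunk = resp.read(1 << 16)  # read in 64 KiB slices
            if not chunk:
                break
            total += len(chunk)
            if total > max_bytes:
                return None  # too large -- stop buffering and bail out
            chunks.append(chunk)
    return b"".join(chunks)
```

The idea is to default to `includeData=false` for routine polling and only opt into the full payload behind the size cap, so a single oversized execution can't take the whole automation chain down with it.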
What is the error message (if any)?
There is no error message as such; the n8n Cloud instance becomes unreachable for a short while and then restarts.
Information on your n8n setup
core
- n8nVersion: 2.9.3
- platform: docker (cloud)
- nodeJsVersion: 24.13.1
- nodeEnv: production
- database: sqlite
- executionMode: regular
- concurrency: 20
- license: enterprise (sandbox)
storage
- success: all
- error: all
- progress: false
- manual: true
- binaryMode: filesystem
pruning
- enabled: true
- maxAge: 720 hours
- maxCount: 25000 executions