I’m experiencing a persistent `RangeError: Invalid string length` when processing large datasets (~1159 items) in n8n. The workflow executes and processes all data correctly, but then fails during the post-execution phase in `Push.relayViaPubSub`, causing the execution to be marked as “failed” despite successful completion.
The core issue: The workflow completes successfully (logs show “Worker finished execution successfully”), but then crashes during UI update serialization, marking the entire execution as failed.
Suspected root cause: this appears to hit V8’s maximum string length when `JSON.stringify()` serializes the large execution data for frontend updates. V8 hardcodes this limit (2^29 − 24 ≈ 536 million characters in current Node.js versions), and it cannot be increased through Node.js flags such as `--max-old-space-size`.
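A minimal repro of the limit outside n8n (a sketch; the sizes are illustrative):

```typescript
// JSON.stringify throws "RangeError: Invalid string length" once its
// output would exceed V8's maximum string length, no matter how much
// heap --max-old-space-size grants.
const chunk = "x".repeat(1024 * 1024); // 1M characters
const big: string[] = [];
for (let i = 0; i < 600; i++) big.push(chunk); // ~600M chars when serialized

try {
  JSON.stringify(big); // output exceeds the ~536M character limit
} catch (e) {
  console.error(e); // RangeError: Invalid string length
}
```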
What is the error message (if any)?
```
There was a problem running hook "workflowExecuteAfter" RangeError: Invalid string length
    at JSON.stringify (<anonymous>)
    at Push.relayViaPubSub (/usr/local/lib/node_modules/n8n/src/push/index.ts:221:56)
    at Push.send (/usr/local/lib/node_modules/n8n/src/push/index.ts:169:9)
    at ExecutionLifecycleHooks.<anonymous> (/usr/local/lib/node_modules/n8n/src/execution-lifecycle/execution-lifecycle-hooks.ts:149:17)
    at ExecutionLifecycleHooks.runHook (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/n8n-core@[email protected][email protected][email protected]_/node_modules/n8n-core/src/execution-engine/execution-lifecycle-hooks.ts:115:28)
```
Additional errors:
- `Failed saving execution progress to database for execution ID XX (hookFunctionsSaveProgress, nodeExecuteAfter)`
- `Cannot read properties of undefined (reading 'Node Name')`
Please share your workflow
I cannot share the complete workflow due to sensitive API integrations, but the structure is:
- Data Source: HTTP Request returning ~1159 items
- Processing: Multiple transformation nodes (Filter, Set, Function nodes)
- API Integration: Loop through items with HTTP requests to external APIs
- Data Volume: Each item contains substantial JSON data (company details, contracts, etc.)
The workflow processes business data through multiple API calls and transformations, with each item being ~50-100KB of JSON data.
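One mitigation I’ve been sketching is a Code node that drops bulky fields before the heavy data reaches the end of the workflow, so the serialized execution data stays small. The field names below are placeholders, not my real schema:

```typescript
// Hypothetical n8n Code node ("Run Once for All Items" mode) that keeps
// only the fields downstream nodes actually need. Omitting large nested
// payloads (contract documents, raw API responses) keeps the execution
// data far below V8's string limit.
const slim = $input.all().map((item) => ({
  json: {
    id: item.json.id,         // placeholder identifier field
    status: item.json.status, // placeholder status field
    // bulky blobs are intentionally dropped here
  },
}));
return slim;
```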
Share the output returned by the last node
The workflow itself succeeds: all data is processed correctly and all API calls complete. The error occurs after successful completion, during the UI update phase.
Expected behavior: Execution marked as “Success”
Actual behavior: Execution marked as “Failed” due to the `Push.relayViaPubSub` serialization error
Information on your n8n setup
- n8n version: 1.98.2
- Database: PostgreSQL 15
- n8n EXECUTIONS_PROCESS setting: queue (with Redis)
- Running n8n via: Docker Compose
- Operating system: Ubuntu Server
Architecture:
- Main container: UI + Queue management
- 2 Worker containers: Processing only
- Redis: Queue management with 48GB memory
- PostgreSQL: Data persistence
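For context, the containers are started roughly like this (simplified from my compose file to plain `docker run` for readability; the `redis` hostname is from my setup):

```bash
# Main instance: serves the UI and enqueues executions in Redis
docker run -d --name n8n-main \
  -e EXECUTIONS_MODE=queue \
  -e QUEUE_BULL_REDIS_HOST=redis \
  n8nio/n8n:1.98.2

# Worker instances: consume jobs from the queue, no UI
docker run -d --name n8n-worker-1 \
  -e EXECUTIONS_MODE=queue \
  -e QUEUE_BULL_REDIS_HOST=redis \
  n8nio/n8n:1.98.2 worker
```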
Attempted Configuration Fixes:
```bash
# Push settings
N8N_PUSH_BACKEND=none              # Tried both none and websocket
N8N_DISABLE_LIFECYCLE_HOOKS=true   # Attempted to disable hooks
N8N_DISABLE_UI=true                # On workers only

# Execution data
EXECUTIONS_DATA_SAVE_ON_SUCCESS=none   # Minimizing data storage
EXECUTIONS_DATA_SAVE_ON_PROGRESS=false
EXECUTIONS_DATA_COMPRESS=true

# Memory
NODE_OPTIONS=--max-old-space-size=40960   # Main: 40GB
NODE_OPTIONS=--max-old-space-size=61440   # Workers: 60GB
N8N_PAYLOAD_SIZE_MAX=26843545600
```
Key Questions
- Is this a known V8 engine limitation that cannot be solved through configuration?
- Can the `Push.relayViaPubSub` serialization be bypassed entirely for large executions while preserving the execution’s success status?
- Why do lifecycle hooks still execute despite `N8N_DISABLE_LIFECYCLE_HOOKS=true`?
- Is there a way to mark executions as successful even if UI updates fail?
This seems to be a fundamental limitation where actual workflow success is overshadowed by a UI update serialization failure. I’m willing to accept no real-time UI updates if the execution status can stay accurate; something like the guard sketched below is the behavior I’d hope for.
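For illustration only (a rough sketch, not n8n’s actual code), the push layer could catch the `RangeError` and skip the relay instead of failing the `workflowExecuteAfter` hook:

```typescript
// Sketch of a graceful-degradation guard (not n8n's actual code): if
// the push payload exceeds V8's maximum string length, skip the
// real-time UI relay instead of failing the whole execution.
function safeStringify(payload: unknown): string | undefined {
  try {
    return JSON.stringify(payload);
  } catch (e) {
    if (e instanceof RangeError) return undefined; // payload too large to relay
    throw e; // anything else is a genuine bug
  }
}

const message = safeStringify({ type: "executionFinished" /* , data */ });
if (message === undefined) {
  console.warn("Push payload exceeds string limit; skipping UI relay");
} else {
  // relay `message` to the frontend as usual
}
```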