Hi N8N Community,
I’m encountering a persistent issue with my N8N workflow where it crashes due to high CPU usage (spiking to 100%) specifically when processing a large text input with an LLM node.
Workflow Overview:
- Webhook: Receives a trigger (e.g., a PDF URL).
- Download PDF: Fetches the PDF file.
- LlamaParse: Processes the PDF using LlamaParse (via HTTP Request node) to extract the text content into Markdown. This step works successfully, even for large documents (e.g., 40 pages).
- Set Node (Optional): Sometimes used to store the LlamaParse markdown output in a specific JSON field (e.g., OCRFile). This node also completes without issues.
- LLM Node (Problem Area): This is where the crash occurs. I'm using an LLM node (tested with both n8n-nodes-langchain.chainLlm and the native n8n-nodes-base.openAi node configured for OpenRouter) to process the full text extracted by LlamaParse.
  - Input: The node receives the entire Markdown text (which can be quite large for multi-page documents), either via an expression like {{ $('LlamaParse_Output_Node').item.json.markdown }} or with the full text pasted directly into the prompt field for testing.
  - Task: The LLM is prompted to extract structured information (JSON) from the text.
- Subsequent Nodes: (Not reached when the crash occurs.)
The Problem:
- For smaller documents (e.g., 1-20 pages), the workflow runs perfectly.
- For larger documents (e.g., 40 pages, resulting in a large markdown string of around 30,000 tokens), the workflow executes up to the LLM node.
- As soon as the LLM node starts processing the large text input, the N8N instance's CPU usage spikes to 100% (verified on monitoring graphs).
- The execution hangs at the LLM node.
- Often, the entire N8N process/container crashes shortly after the CPU spike, requiring a restart. It appears to be a genuine crash rather than just a long processing time or an API timeout.
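For what it's worth, the ~30,000-token figure is a rough estimate based on the usual "about 4 characters per token" heuristic for English text, not an exact count (a real tokenizer like tiktoken would give exact numbers):

```javascript
// Rough heuristic: English text averages ~4 characters per token.
// A real tokenizer (e.g. tiktoken) would give exact counts.
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

// A 40-page document yielding ~120,000 characters of markdown:
console.log(estimateTokens('a'.repeat(120000))); // → 30000
```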
What I’ve Tried:
- Increased Resources: Hosted on Render and Railway; increased instance resources significantly (up to 6 vCPU / 6 GB RAM). The crash still occurs.
- Direct Pasting: Pasting the large text directly into the LLM node's prompt field (eliminating potential issues with expression handling) still results in a crash.
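One knob I haven't tried yet: Node.js enforces its own default heap limit regardless of how much RAM the container has, so the extra instance memory may be going unused. It can be raised via an environment variable; the 4096 below is just an example value sized for a ~6 GB instance, not a recommendation from the n8n docs:

```shell
# Untested here: raise Node.js's heap limit (value in MB), which is capped
# by default even when the container has more RAM available.
NODE_OPTIONS="--max-old-space-size=4096"
```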
Hypothesis:
It seems N8N is struggling to handle/process the very large text string within the LLM node’s execution context before the API call is even fully constructed or sent.
Question:
Has anyone else experienced similar instant crashes or extreme CPU spikes when feeding very large text inputs (e.g., full document OCR results) into LLM nodes? Are there known limitations or best practices for handling such large single data items within an N8N node, other than chunking the input beforehand?
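For context, the kind of pre-chunking I'd like to avoid would look roughly like this in a Code node before the LLM node (a sketch only; `chunkText`, the chunk size, and the overlap are illustrative values, not an n8n API):

```javascript
// Sketch: split a large markdown string into overlapping chunks so each
// LLM call stays well under the size that triggers the crash.
// maxChars and overlap are illustrative; tune to the model's context window.
function chunkText(text, maxChars = 12000, overlap = 500) {
  const chunks = [];
  let start = 0;
  while (start < text.length) {
    let end = Math.min(start + maxChars, text.length);
    // Prefer to break on a paragraph boundary if one is reasonably close.
    const nl = text.lastIndexOf('\n\n', end);
    if (nl > start + maxChars / 2) end = nl;
    chunks.push(text.slice(start, end));
    if (end === text.length) break;
    start = end - overlap; // overlap preserves context across chunk borders
  }
  return chunks;
}
```

In an n8n Code node, each chunk would then be returned as its own item so the downstream LLM node runs once per chunk instead of once over the whole document.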
env: DB_POSTGRESDB_DATABASE="postgres"
DB_POSTGRESDB_HOST="aws-0-ap-southeast-1.pooler.supabase.com"
DB_POSTGRESDB_PORT="6543"
DB_TYPE="postgresdb"
ENABLE_ALPINE_PRIVATE_NETWORKING="false"
EXECUTIONS_DATA_MAX_AGE="336"
EXECUTIONS_DATA_PRUNE="true"
EXECUTIONS_DATA_PRUNE_MAX_COUNT="1000"
EXECUTIONS_MODE="regular"
EXECUTIONS_PROCESS="main"
N8N_DEFAULT_BINARY_DATA_MODE="filesystem"
N8N_EDITOR_BASE_URL="https:/*******.railway.app"
N8N_ENCRYPTION_KEY="*******D3a-t"
N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS="true"
N8N_LISTEN_ADDRESS="::"
N8N_REINSTALL_MISSING_PACKAGES="true"
PORT="5678"
WEBHOOK_URL="https://.railway.app"
Using latest n8n
Any insights or suggestions would be greatly appreciated!
Thanks,
Rifad