Hey all!
I’m trying to split a long Deepgram transcript into 4 chunks before sending each part to the OpenAI “Message a Model” node.
My workflow looks like this:
HTTP Request (Deepgram)
→ Edit Fields (keep only transcript)
→ Code in JavaScript (split into 4 chunks)
→ Loop Over Items (Split in Batches)
→ Message a Model (GPT)
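For context, the item that Edit Fields hands to the Code node at this point looks like this (transcript abbreviated; the `{ json: ... }` wrapper is just n8n’s standard item format):

```js
// Input arriving at the Code node: a single n8n item whose json
// payload contains only the transcript field kept by Edit Fields.
[
  {
    json: {
      transcript: "…" // full Deepgram transcript text, abbreviated here
    }
  }
]
```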
Even though my “Edit Fields” node only outputs `{ transcript: "…" }`, I still get this error in the Code node:
Code generation failed
Your workflow data is too large for AI to process.
Here’s the code I’m using:
const text = $json.transcript;
const desiredChunks = 4;
const overlap = 300; // characters repeated between consecutive chunks
const chunkSize = Math.ceil(text.length / desiredChunks);
const chunks = [];

// Advance by chunkSize - overlap so consecutive chunks share some context
for (let i = 0; i < text.length; i += chunkSize - overlap) {
  chunks.push({ chunk: text.slice(i, i + chunkSize) });
}

// A Code node must return an array of items, each wrapped in { json: ... }
return chunks.map(c => ({ json: c }));
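For what it’s worth, since the loop above steps by chunkSize - overlap, it actually emits more than four chunks. Here’s an untested sketch of the layout I’m aiming for instead, computing the slice boundaries up front so the count is always exactly desiredChunks (same `$json.transcript` input; variable names are just illustrative):

```js
// Sketch: derive each slice's start/end directly from its index so the
// output is always exactly desiredChunks items; each chunk is extended
// `overlap` characters to the left so neighbouring chunks share context.
const text = $json.transcript;
const desiredChunks = 4;
const overlap = 300;
const step = Math.ceil(text.length / desiredChunks);

const items = [];
for (let n = 0; n < desiredChunks; n++) {
  const start = Math.max(0, n * step - overlap);
  const end = Math.min(text.length, (n + 1) * step);
  items.push({ json: { chunk: text.slice(start, end) } });
}
return items;
```

The Math.max/Math.min clamps just keep the first and last slices inside the bounds of the string.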
Has anyone managed to split a large Deepgram transcript successfully without hitting this error?
Any suggestions for handling long text safely in n8n?
Thanks!
— Sebastian
Information on your n8n setup
- n8n version:
- Database (default: SQLite):
- n8n EXECUTIONS_PROCESS setting (default: own, main):
- Running n8n via (Docker, npm, n8n cloud, desktop app):
- Operating system: