Describe the problem/error/question
The Gemini Image Generation node consistently throws an error when processing multiple items, failing at item index 5 (the sixth item) with:
NodeOperationError: Cannot read properties of undefined (reading 'map')
What is the error message (if any)?
{
  "errorMessage": "Cannot read properties of undefined (reading 'map') [item 5]",
  "errorDetails": {},
  "n8nDetails": {
    "nodeName": "Generate an image",
    "nodeType": "@n8n/n8n-nodes-langchain.googleGemini",
    "nodeVersion": 1,
    "resource": "image",
    "operation": "generate",
    "itemIndex": 5,
    "time": "9/18/2025, 11:39:09 PM",
    "n8nVersion": "1.109.2 (Cloud)",
    "binaryDataMode": "filesystem",
    "stackTrace": [
      "NodeOperationError: Cannot read properties of undefined (reading 'map')",
      " at ExecuteContext.router (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/@n8n+n8n-nodes-langchain@file+packages+@n8n+nodes-langchain_6a42402e1b434941076375196b5319e5/node_modules/@n8n/n8n-nodes-langchain/nodes/vendors/GoogleGemini/actions/router.ts:60:10)",
      " at processTicksAndRejections (node:internal/process/task_queues:105:5)",
      " at ExecuteContext.execute (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/@n8n+n8n-nodes-langchain@file+packages+@n8n+nodes-langchain_6a42402e1b434941076375196b5319e5/node_modules/@n8n/n8n-nodes-langchain/nodes/vendors/GoogleGemini/GoogleGemini.node.ts:15:10)",
      " at WorkflowExecute.executeNode (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/n8n-core@file+packages+core_@[email protected]_@[email protected]_5aee33ef851c7de341eb325c6a25e0ff/node_modules/n8n-core/src/execution-engine/workflow-execute.ts:1253:8)",
      " at WorkflowExecute.runNode (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/n8n-core@file+packages+core_@[email protected]_@[email protected]_5aee33ef851c7de341eb325c6a25e0ff/node_modules/n8n-core/src/execution-engine/workflow-execute.ts:1427:11)",
      " at /usr/local/lib/node_modules/n8n/node_modules/.pnpm/n8n-core@file+packages+core_@[email protected]_@[email protected]_5aee33ef851c7de341eb325c6a25e0ff/node_modules/n8n-core/src/execution-engine/workflow-execute.ts:1727:27",
      " at /usr/local/lib/node_modules/n8n/node_modules/.pnpm/n8n-core@file+packages+core_@[email protected]_@[email protected]_5aee33ef851c7de341eb325c6a25e0ff/node_modules/n8n-core/src/execution-engine/workflow-execute.ts:2303:11"
    ]
  }
}
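The stack trace points at `router.ts:60`, where the node calls `.map` on a value that is undefined, which suggests at least one incoming item does not carry the shape the node expects. As a hedged workaround sketch (the function name and field handling here are assumptions based on the workflow below, not part of the node's API), a Code node placed just before "Generate an image" could drop or normalize malformed items so every item reaching the Gemini node has a usable string prompt:

```javascript
// Hypothetical pre-flight guard for the "Generate an image" node.
// Assumes items carry json.id and json.prompt as produced by the
// "Edit Fields11" / "Code2" nodes in the workflow shared below.
function guardPrompts(items) {
  return items
    .map((item, index) => {
      const raw = item.json.prompt;
      // "Edit Fields11" wraps the prompt in a one-element array; unwrap it.
      const prompt = Array.isArray(raw) ? raw[0] : raw;
      if (typeof prompt !== 'string' || prompt.trim() === '') {
        // Returning null marks the item for removal below.
        return null;
      }
      return { json: { id: item.json.id ?? index, prompt } };
    })
    .filter((item) => item !== null);
}

// In an n8n Code node ("Run Once for All Items") the body would end with:
// return guardPrompts($input.all());
```

If the item count drops after this guard, that identifies which upstream item arrived without a prompt.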
Please share your workflow
{
"nodes": [
{
"parameters": {
"assignments": {
"assignments": [
{
"id": "00221546-0e02-45f3-9c79-dd37507e3715",
"name": "id",
"value": "={{ $json.id }}",
"type": "string"
},
{
"id": "e720aafe-1a1c-495a-84de-abccfad3c136",
"name": "prompt",
"value": "={{ $json.text }}",
"type": "string"
}
]
},
"options": {}
},
"type": "n8n-nodes-base.set",
"typeVersion": 3.4,
"position": [
6992,
-2608
],
"id": "179841f5-4ab5-449d-b5fc-6390676fa273",
"name": "Get Text Segments"
},
{
"parameters": {
"modelId": {
"__rl": true,
"value": "gpt-5-mini",
"mode": "list",
"cachedResultName": "GPT-5-MINI"
},
"messages": {
"values": [
{
"content": "=Create an image prompt for this script segment:\n\"{{ $json.prompt }}\"\n\nReturn JSON in this format:\n{\n \"id\": {{ $json.id }},\n \"image_prompt\": \"\"\n}"
}
]
},
"jsonOutput": true,
"options": {}
},
"type": "@n8n/n8n-nodes-langchain.openAi",
"typeVersion": 1.8,
"position": [
7152,
-2608
],
"id": "ebb7ea7c-a915-46af-84a1-3aa81dd33f25",
"name": "Create Image Prompts From Segments",
"credentials": {
"openAiApi": {
"id": "0z4HRp6ko6NNWm2M",
"name": "OpenAi account"
}
}
},
{
"parameters": {
"modelId": {
"__rl": true,
"value": "gpt-5",
"mode": "list",
"cachedResultName": "GPT-5"
},
"messages": {
"values": [
{
"content": "=You will receive:\n\nWhisper Segments, A JSON array containing transcribed audio segments with ACTUAL timestamps showing when each phrase was spoken: {{ $json.segments }}\nText Blocks, An array of cleaned, presentation-ready text strings derived from those segments: {{ $json.message.content.blocks }}\n\nYour Goal\nMap the text blocks to realistic timings by referencing the Whisper segment timestamps as ground truth for speech pacing, while creating a smooth, proportional distribution.\nCritical Instructions\n1. Use Whisper Data as Your Foundation\n\nThe Whisper segments show EXACTLY how long each phrase took to speak\nFind which segment(s) correspond to each block's content\nUse those real timestamps as your baseline\nThen adjust to create better distribution across blocks\n\n2. Intelligent Timing Distribution\n\nStart with Whisper reality: If Whisper shows \"Investors are gearing up for Tesla's November 6th annual meeting\" took 0-4.48 seconds, that's your reference\nBut don't just copy boundaries: If your block is shorter (\"Investors are gearing up for Tesla's November 6 annual meeting,\"), estimate it would end around 2.5 seconds, not the full 4.48\nSmooth out the pacing: If segments are choppy, create more natural flow\n\n3. Timing Guidelines\n\nMinimum duration: Every block must be at least 1.5 seconds\nUse Whisper's actual pacing to inform your decisions:\n\nIf Whisper shows 10 words took 3 seconds, use that ratio\nIf a block contains half the content of a segment, give it roughly half the time\nIf a block combines two segments, sum their durations\n\n\n\n4. Proportional Redistribution\n\nCount words/syllables in blocks vs their corresponding segments\nIf block has 80% of segment's words, give it ~80% of the time\nEnsure smooth transitions between blocks\nTotal duration MUST match the audio length from Whisper\n\n5. Continuity Requirements\n\nThe end time of each block MUST equal the start time of the next block\nNo gaps or overlaps in the timeline\nThe final block's end time should match the last segment's end time\n\nOutput Format\njson[\n {\n \"id\": 0,\n \"text\": \"[exact text from block]\",\n \"start\": 0,\n \"end\": [calculated using Whisper reference + proportional adjustment]\n },\n {\n \"id\": 1,\n \"text\": \"[exact text from block]\",\n \"start\": [previous end time],\n \"end\": [calculated using Whisper reference + proportional adjustment]\n }\n]\nExample Approach\nGiven:\n\nWhisper segment: \"Investors are gearing up for Tesla's November 6th annual meeting, where they'll vote on a new\" (0-4.48s)\nBlock 1: \"Investors are gearing up for Tesla's November 6 annual meeting,\"\nBlock 2: \"where they'll vote on a new shareholder proposal for Tesla to invest in xAI—\"\n\nAnalysis:\n\nBlock 1 is ~60% of the segment content → ~2.5 seconds\nBlock 2 continues into the next segment, check Whisper for \"shareholder proposal\" timing\nUse Whisper's pacing but redistribute for better flow\n\nFinal Checklist\n✓ Referenced Whisper timestamps for realistic speech pacing\n✓ Proportionally adjusted based on actual content in each block\n✓ All blocks are at least 1.5 seconds long\n✓ Timings form a continuous sequence\n✓ Total duration matches Whisper's total audio length\n✓ Distribution feels natural, not just copying segment boundaries.\n\nMake sure there are 8 total blocks"
}
]
},
"jsonOutput": true,
"options": {}
},
"type": "@n8n/n8n-nodes-langchain.openAi",
"typeVersion": 1.8,
"position": [
5856,
-2496
],
"id": "28cc4ba5-c1ba-4ff7-89ff-8d9e2a906621",
"name": "Match Blocks with Timestamps",
"credentials": {
"openAiApi": {
"id": "0z4HRp6ko6NNWm2M",
"name": "OpenAi account"
}
}
},
{
"parameters": {
"assignments": {
"assignments": [
{
"id": "9be5f7fd-fe25-42cb-8eed-ef83390988ba",
"name": "id",
"value": "={{ $json.message.content.id }}",
"type": "number"
},
{
"id": "abedc61c-4a73-467f-8cec-5654cbfeba5d",
"name": "prompt",
"value": "={{ [ $json.message.content.image_prompt ] }}",
"type": "array"
}
]
},
"options": {}
},
"type": "n8n-nodes-base.set",
"typeVersion": 3.4,
"position": [
7504,
-2608
],
"id": "eb36d785-e6b2-4e91-962c-601df57d3016",
"name": "Edit Fields11"
},
{
"parameters": {
"fieldToSplitOut": "=message.content.blocks",
"options": {}
},
"type": "n8n-nodes-base.splitOut",
"typeVersion": 1,
"position": [
6416,
-2496
],
"id": "bfba8284-91bb-46cd-b136-162f825bea0a",
"name": "Split Out"
},
{
"parameters": {
"resource": "image",
"modelId": {
"__rl": true,
"value": "models/imagen-3.0-generate-002",
"mode": "list",
"cachedResultName": "models/imagen-3.0-generate-002"
},
"prompt": "={{ $json.prompt }} image should be 9:16",
"options": {}
},
"type": "@n8n/n8n-nodes-langchain.googleGemini",
"typeVersion": 1,
"position": [
8144,
-2624
],
"id": "cfeb12d5-7956-4415-8097-40b11e54d69d",
"name": "Generate an image",
"credentials": {
"googlePalmApi": {
"id": "ltScf0vk0FoAfpKF",
"name": "Google Gemini(PaLM) Api account"
}
}
},
{
"parameters": {
"jsCode": "return items.map((item, index) => {\n const prompt = item.json.prompt ?? item.json.image_prompt ?? null;\n\n return {\n json: {\n id: item.json.id ?? index,\n // force everything to be an array of strings\n prompt: Array.isArray(prompt) ? prompt : (prompt ? [prompt] : [])\n }\n };\n});"
},
"type": "n8n-nodes-base.code",
"typeVersion": 2,
"position": [
7712,
-2608
],
"id": "9974bbc7-1f5b-4d53-ac14-a36731fe79c5",
"name": "Code2"
}
],
"connections": {
"Get Text Segments": {
"main": [
[
{
"node": "Create Image Prompts From Segments",
"type": "main",
"index": 0
}
]
]
},
"Create Image Prompts From Segments": {
"main": [
[
{
"node": "Edit Fields11",
"type": "main",
"index": 0
}
]
]
},
"Match Blocks with Timestamps": {
"main": [
[
{
"node": "Split Out",
"type": "main",
"index": 0
}
]
]
},
"Edit Fields11": {
"main": [
[
{
"node": "Code2",
"type": "main",
"index": 0
}
]
]
},
"Split Out": {
"main": [
[
{
"node": "Get Text Segments",
"type": "main",
"index": 0
}
]
]
},
"Generate an image": {
"main": [
]
},
"Code2": {
"main": [
[
{
"node": "Generate an image",
"type": "main",
"index": 0
}
]
]
}
},
"pinData": {},
"meta": {
"templateCredsSetupCompleted": true,
"instanceId": "87038e00a86ecc84a4953697b77d06837e86f175adf54a526e48c42faeff2bdb"
}
}
(Select the nodes on your canvas and use the keyboard shortcuts CMD+C/CTRL+C and CMD+V/CTRL+V to copy and paste the workflow.)
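One detail worth noting: the "Edit Fields11" node sets `prompt` as an array, while the Gemini node's expression `={{ $json.prompt }} image should be 9:16` reads better as plain text. As a hedged alternative sketch for the "Code2" body (assuming the same `id`/`prompt`/`image_prompt` fields as above), emitting the prompt as a single string avoids passing an array into the expression at all:

```javascript
// Hypothetical replacement for the "Code2" body: always emit prompt
// as a plain string rather than a one-element array, so the Gemini
// node's prompt expression resolves to text.
function flattenPrompts(items) {
  return items.map((item, index) => {
    const raw = item.json.prompt ?? item.json.image_prompt ?? '';
    // Join array prompts into one string; coerce anything else to a string.
    const prompt = Array.isArray(raw) ? raw.join(' ') : String(raw);
    return { json: { id: item.json.id ?? index, prompt } };
  });
}

// In the n8n Code node, the body would end with:
// return flattenPrompts($input.all());
```

This is a sketch, not a confirmed fix for the `.map` error, but it removes one shape mismatch between the Set node's output and what the image node consumes.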
Share the output returned by the last node
Added above, in the error message section.
Information on your n8n setup
- n8n version: 1.109.2 (Cloud)
- Database (default: SQLite):
- n8n EXECUTIONS_PROCESS setting (default: own, main):
- Running n8n via (Docker, npm, n8n cloud, desktop app): n8n cloud
- Operating system: macOS (accessing n8n Cloud)