Help with random Map Error

Describe the problem/error/question

The Gemini Image Generation node consistently throws an error when processing multiple items, specifically failing at item index 4 (the 5th item) with:

NodeOperationError: Cannot read properties of undefined (reading 'map')

What is the error message (if any)?

{
  "errorMessage": "Cannot read properties of undefined (reading 'map') [item 5]",
  "errorDetails": {},
  "n8nDetails": {
    "nodeName": "Generate an image",
    "nodeType": "@n8n/n8n-nodes-langchain.googleGemini",
    "nodeVersion": 1,
    "resource": "image",
    "operation": "generate",
    "itemIndex": 5,
    "time": "9/18/2025, 11:39:09 PM",
    "n8nVersion": "1.109.2 (Cloud)",
    "binaryDataMode": "filesystem",
    "stackTrace": [
      "NodeOperationError: Cannot read properties of undefined (reading 'map')",
      "    at ExecuteContext.router (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/@n8n+n8n-nodes-langchain@file+packages+@n8n+nodes-langchain_6a42402e1b434941076375196b5319e5/node_modules/@n8n/n8n-nodes-langchain/nodes/vendors/GoogleGemini/actions/router.ts:60:10)",
      "    at processTicksAndRejections (node:internal/process/task_queues:105:5)",
      "    at ExecuteContext.execute (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/@n8n+n8n-nodes-langchain@file+packages+@n8n+nodes-langchain_6a42402e1b434941076375196b5319e5/node_modules/@n8n/n8n-nodes-langchain/nodes/vendors/GoogleGemini/GoogleGemini.node.ts:15:10)",
      "    at WorkflowExecute.executeNode (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/n8n-core@file+packages+core_@[email protected]_@[email protected]_5aee33ef851c7de341eb325c6a25e0ff/node_modules/n8n-core/src/execution-engine/workflow-execute.ts:1253:8)",
      "    at WorkflowExecute.runNode (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/n8n-core@file+packages+core_@[email protected]_@[email protected]_5aee33ef851c7de341eb325c6a25e0ff/node_modules/n8n-core/src/execution-engine/workflow-execute.ts:1427:11)",
      "    at /usr/local/lib/node_modules/n8n/node_modules/.pnpm/n8n-core@file+packages+core_@[email protected]_@[email protected]_5aee33ef851c7de341eb325c6a25e0ff/node_modules/n8n-core/src/execution-engine/workflow-execute.ts:1727:27",
      "    at /usr/local/lib/node_modules/n8n/node_modules/.pnpm/n8n-core@file+packages+core_@[email protected]_@[email protected]_5aee33ef851c7de341eb325c6a25e0ff/node_modules/n8n-core/src/execution-engine/workflow-execute.ts:2303:11"
    ]
  }
}

Please share your workflow

{
  "nodes": [
    {
      "parameters": {
        "assignments": {
          "assignments": [
            {
              "id": "00221546-0e02-45f3-9c79-dd37507e3715",
              "name": "id",
              "value": "={{ $json.id }}",
              "type": "string"
            },
            {
              "id": "e720aafe-1a1c-495a-84de-abccfad3c136",
              "name": "prompt",
              "value": "={{ $json.text }}",
              "type": "string"
            }
          ]
        },
        "options": {}
      },
      "type": "n8n-nodes-base.set",
      "typeVersion": 3.4,
      "position": [6992, -2608],
      "id": "179841f5-4ab5-449d-b5fc-6390676fa273",
      "name": "Get Text Segments"
    },
    {
      "parameters": {
        "modelId": {
          "__rl": true,
          "value": "gpt-5-mini",
          "mode": "list",
          "cachedResultName": "GPT-5-MINI"
        },
        "messages": {
          "values": [
            {
              "content": "=Create an image prompt for this script segment:\n\"{{ $json.prompt }}\"\n\nReturn JSON in this format:\n{\n \"id\": {{ $json.id }},\n \"image_prompt\": \"<description>\"\n}"
            }
          ]
        },
        "jsonOutput": true,
        "options": {}
      },
      "type": "@n8n/n8n-nodes-langchain.openAi",
      "typeVersion": 1.8,
      "position": [7152, -2608],
      "id": "ebb7ea7c-a915-46af-84a1-3aa81dd33f25",
      "name": "Create Image Prompts From Segments",
      "credentials": {
        "openAiApi": {
          "id": "0z4HRp6ko6NNWm2M",
          "name": "OpenAi account"
        }
      }
    },
    {
      "parameters": {
        "modelId": {
          "__rl": true,
          "value": "gpt-5",
          "mode": "list",
          "cachedResultName": "GPT-5"
        },
        "messages": {
          "values": [
            {
              "content": "=You will receive:\n\nWhisper Segments, A JSON array containing transcribed audio segments with ACTUAL timestamps showing when each phrase was spoken: {{ $json.segments }}\nText Blocks, An array of cleaned, presentation-ready text strings derived from those segments: {{ $json.message.content.blocks }}\n\nYour Goal\nMap the text blocks to realistic timings by referencing the Whisper segment timestamps as ground truth for speech pacing, while creating a smooth, proportional distribution.\nCritical Instructions\n1. Use Whisper Data as Your Foundation\n\nThe Whisper segments show EXACTLY how long each phrase took to speak\nFind which segment(s) correspond to each block's content\nUse those real timestamps as your baseline\nThen adjust to create better distribution across blocks\n\n2. Intelligent Timing Distribution\n\nStart with Whisper reality: If Whisper shows \"Investors are gearing up for Tesla's November 6th annual meeting\" took 0-4.48 seconds, that's your reference\nBut don't just copy boundaries: If your block is shorter (\"Investors are gearing up for Tesla's November 6 annual meeting,\"), estimate it would end around 2.5 seconds, not the full 4.48\nSmooth out the pacing: If segments are choppy, create more natural flow\n\n3. Timing Guidelines\n\nMinimum duration: Every block must be at least 1.5 seconds\nUse Whisper's actual pacing to inform your decisions:\n\nIf Whisper shows 10 words took 3 seconds, use that ratio\nIf a block contains half the content of a segment, give it roughly half the time\nIf a block combines two segments, sum their durations\n\n\n\n4. Proportional Redistribution\n\nCount words/syllables in blocks vs their corresponding segments\nIf block has 80% of segment's words, give it ~80% of the time\nEnsure smooth transitions between blocks\nTotal duration MUST match the audio length from Whisper\n\n5. Continuity Requirements\n\nThe end time of each block MUST equal the start time of the next block\nNo gaps or overlaps in the timeline\nThe final block's end time should match the last segment's end time\n\nOutput Format\njson[\n {\n \"id\": 0,\n \"text\": \"[exact text from block]\",\n \"start\": 0,\n \"end\": [calculated using Whisper reference + proportional adjustment]\n },\n {\n \"id\": 1,\n \"text\": \"[exact text from block]\",\n \"start\": [previous end time],\n \"end\": [calculated using Whisper reference + proportional adjustment]\n }\n]\nExample Approach\nGiven:\n\nWhisper segment: \"Investors are gearing up for Tesla's November 6th annual meeting, where they'll vote on a new\" (0-4.48s)\nBlock 1: \"Investors are gearing up for Tesla's November 6 annual meeting,\"\nBlock 2: \"where they'll vote on a new shareholder proposal for Tesla to invest in xAI—\"\n\nAnalysis:\n\nBlock 1 is ~60% of the segment content → ~2.5 seconds\nBlock 2 continues into the next segment, check Whisper for \"shareholder proposal\" timing\nUse Whisper's pacing but redistribute for better flow\n\nFinal Checklist\n✓ Referenced Whisper timestamps for realistic speech pacing\n✓ Proportionally adjusted based on actual content in each block\n✓ All blocks are at least 1.5 seconds long\n✓ Timings form a continuous sequence\n✓ Total duration matches Whisper's total audio length\n✓ Distribution feels natural, not just copying segment boundaries.\n\nMake sure there are 8 total blocks "
            }
          ]
        },
        "jsonOutput": true,
        "options": {}
      },
      "type": "@n8n/n8n-nodes-langchain.openAi",
      "typeVersion": 1.8,
      "position": [5856, -2496],
      "id": "28cc4ba5-c1ba-4ff7-89ff-8d9e2a906621",
      "name": "Match Blocks with Timestamps",
      "credentials": {
        "openAiApi": {
          "id": "0z4HRp6ko6NNWm2M",
          "name": "OpenAi account"
        }
      }
    },
    {
      "parameters": {
        "assignments": {
          "assignments": [
            {
              "id": "9be5f7fd-fe25-42cb-8eed-ef83390988ba",
              "name": "id",
              "value": "={{ $json.message.content.id }}",
              "type": "number"
            },
            {
              "id": "abedc61c-4a73-467f-8cec-5654cbfeba5d",
              "name": "prompt",
              "value": "={{ [ $json.message.content.image_prompt ] }}",
              "type": "array"
            }
          ]
        },
        "options": {}
      },
      "type": "n8n-nodes-base.set",
      "typeVersion": 3.4,
      "position": [7504, -2608],
      "id": "eb36d785-e6b2-4e91-962c-601df57d3016",
      "name": "Edit Fields11"
    },
    {
      "parameters": {
        "fieldToSplitOut": "=message.content.blocks",
        "options": {}
      },
      "type": "n8n-nodes-base.splitOut",
      "typeVersion": 1,
      "position": [6416, -2496],
      "id": "bfba8284-91bb-46cd-b136-162f825bea0a",
      "name": "Split Out"
    },
    {
      "parameters": {
        "resource": "image",
        "modelId": {
          "__rl": true,
          "value": "models/imagen-3.0-generate-002",
          "mode": "list",
          "cachedResultName": "models/imagen-3.0-generate-002"
        },
        "prompt": "={{ $json.prompt }} image should be 9:16",
        "options": {}
      },
      "type": "@n8n/n8n-nodes-langchain.googleGemini",
      "typeVersion": 1,
      "position": [8144, -2624],
      "id": "cfeb12d5-7956-4415-8097-40b11e54d69d",
      "name": "Generate an image",
      "credentials": {
        "googlePalmApi": {
          "id": "ltScf0vk0FoAfpKF",
          "name": "Google Gemini(PaLM) Api account"
        }
      }
    },
    {
      "parameters": {
        "jsCode": "return items.map((item, index) => {\n const prompt = item.json.prompt ?? item.json.image_prompt ?? null;\n\n return {\n json: {\n id: item.json.id ?? index,\n // force everything to be an array of strings\n prompt: Array.isArray(prompt) ? prompt : (prompt ? [prompt] : [])\n }\n };\n});"
      },
      "type": "n8n-nodes-base.code",
      "typeVersion": 2,
      "position": [7712, -2608],
      "id": "9974bbc7-1f5b-4d53-ac14-a36731fe79c5",
      "name": "Code2"
    }
  ],
  "connections": {
    "Get Text Segments": {
      "main": [[{ "node": "Create Image Prompts From Segments", "type": "main", "index": 0 }]]
    },
    "Create Image Prompts From Segments": {
      "main": [[{ "node": "Edit Fields11", "type": "main", "index": 0 }]]
    },
    "Match Blocks with Timestamps": {
      "main": [[{ "node": "Split Out", "type": "main", "index": 0 }]]
    },
    "Edit Fields11": {
      "main": [[{ "node": "Code2", "type": "main", "index": 0 }]]
    },
    "Split Out": {
      "main": [[{ "node": "Get Text Segments", "type": "main", "index": 0 }]]
    },
    "Generate an image": {
      "main": []
    },
    "Code2": {
      "main": [[{ "node": "Generate an image", "type": "main", "index": 0 }]]
    }
  },
  "pinData": {},
  "meta": {
    "templateCredsSetupCompleted": true,
    "instanceId": "87038e00a86ecc84a4953697b77d06837e86f175adf54a526e48c42faeff2bdb"
  }
}

(Select the nodes on your canvas and use the keyboard shortcuts CMD+C/CTRL+C and CMD+V/CTRL+V to copy and paste the workflow.)
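The normalization that the Code2 node performs can be exercised outside n8n. Below is a self-contained sketch of the same logic in plain Node.js; the empty-array fallback for missing prompts is an assumption about what was intended, and the item shape (`{ json: { id, prompt } }`) follows n8n's Code-node convention:

```javascript
// Normalizes each item's "prompt" field into an array of strings,
// mirroring the Code2 node in the workflow above.
function normalizeItems(items) {
  return items.map((item, index) => {
    const prompt = item.json.prompt ?? item.json.image_prompt ?? null;

    return {
      json: {
        id: item.json.id ?? index,
        // Force everything to be an array of strings; a missing prompt
        // becomes an empty array instead of undefined, so downstream
        // code that iterates over it cannot crash.
        prompt: Array.isArray(prompt) ? prompt : (prompt ? [prompt] : []),
      },
    };
  });
}

console.log(normalizeItems([
  { json: { id: 0, prompt: "a red car" } },   // string -> ["a red car"]
  { json: { id: 1, prompt: ["a blue sky"] } }, // already an array
  { json: { id: 2 } },                         // missing -> []
]));
```

Note that the fallback only protects consumers of the `prompt` field; it does not change what the Gemini node's router does internally with its own response handling.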

Share the output returned by the last node

Added it above in the error message.

Information on your n8n setup

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system: n8n cloud on new mac

I am not able to see your workflow, but from the error message I would guess it is happening because .map runs on an array of items, and that array is currently undefined.

At this line:
"jsCode": "return items.map((item, index) => {\n const prompt = item.json.prompt ??
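For anyone hitting this generically: calling .map on a value that is undefined throws exactly this TypeError, and a defensive guard avoids it. A minimal sketch in plain Node.js (not n8n-specific; the item shape is assumed):

```javascript
// Reproduces and guards against the failure mode:
// calling .map on undefined throws
// "Cannot read properties of undefined (reading 'map')".
function buildPrompts(items) {
  // Guard: fall back to an empty array when the input is missing.
  const list = Array.isArray(items) ? items : [];
  return list.map((item, index) => ({
    id: item.json.id ?? index,
    prompt: item.json.prompt ?? null,
  }));
}

// Without the guard, buildPrompts(undefined) would throw;
// with it, we simply get an empty result.
console.log(buildPrompts(undefined)); // []
console.log(buildPrompts([{ json: { id: 1, prompt: "a cat" } }]));
```

Note the stack trace actually points into the Gemini node's own router, not a Code node, so the undefined array may be inside the node's response handling rather than in user code.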

Hey, could you re-post your workflow?

Yeah: { "nodes": [ { "parameters": { "assignments": { "assignments": [ { "id": "00221546-0e02-45f3-9c79-dd37507e3715", "name": "id", "value": "={{ $json.id }}", "type": "string" }, { "id": "e720aafe-1a1c-495a-84de-abccfad3c136", "name": "prompt", "value": "={{ $json.text }}", "type": "string" } ] }, "options": {} }, "type": "n8n-nodes-base.set", "typeVersion": 3.4, "position": [ 6992, -2608 ], "id": "179841f5-4ab5-449d-b5fc-6390676fa273", "name": "Get Text Segments" }, { "parameters": { "modelId": { "__rl": true, "value": "gpt-5-mini", "mode": "list", "cachedResultName": "GPT-5-MINI" }, "messages": { "values": [ { "content": "=Create an image prompt for this script segment:\n\"{{ $json.prompt }}\"\n\nReturn JSON in this format:\n{\n \"id\": {{ $json.id }},\n \"image_prompt\": \"<description>\"\n}" } ] }, "jsonOutput": true, "options": {} }, "type": "@n8n/n8n-nodes-langchain.openAi", "typeVersion": 1.8, "position": [ 7152, -2608 ], "id": "ebb7ea7c-a915-46af-84a1-3aa81dd33f25", "name": "Create Image Prompts From Segments", "credentials": { "openAiApi": { "id": "0z4HRp6ko6NNWm2M", "name": "OpenAi account" } } }, { "parameters": { "assignments": { "assignments": [ { "id": "9be5f7fd-fe25-42cb-8eed-ef83390988ba", "name": "id", "value": "={{ $json.message.content.id }}", "type": "number" }, { "id": "abedc61c-4a73-467f-8cec-5654cbfeba5d", "name": "prompt", "value": "={{ [ $json.message.content.image_prompt ] }}", "type": "array" } ] }, "options": {} }, "type": "n8n-nodes-base.set", "typeVersion": 3.4, "position": [ 7632, -2608 ], "id": "eb36d785-e6b2-4e91-962c-601df57d3016", "name": "Edit Fields11" }, { "parameters": { "mode": "runOnceForEachItem", "jsCode": "// Code node - Set to \"Run Once for Each Item\" mode\n// Handles both array and string formats\n\n// Get the prompt - could be in different formats\nlet promptText = '';\n\n// Check if prompt exists and what format it's in\nif ($json.prompt) {\n if (Array.isArray($json.prompt)) {\n // It's an array, take the first 
element\n promptText = $json.prompt[0];\n } else if (typeof $json.prompt === 'string') {\n // Check if it's a stringified array like \"[\\\"text\\\"]\"\n if ($json.prompt.startsWith('[')) {\n try {\n const parsed = JSON.parse($json.prompt);\n promptText = Array.isArray(parsed) ? parsed[0] : parsed;\n } catch (e) {\n promptText = $json.prompt;\n }\n } else {\n // It's already a plain string\n promptText = $json.prompt;\n }\n }\n} else {\n // No prompt field, create error message\n promptText = 'Generate an image';\n}\n\nreturn {\n json: {\n id: $json.id,\n promptText: promptText\n }\n};" }, "type": "n8n-nodes-base.code", "typeVersion": 2, "position": [ 7840, -2608 ], "id": "fc837c93-9658-4995-bb31-c98a3587c892", "name": "Code3" }, { "parameters": { "options": {} }, "type": "n8n-nodes-base.splitInBatches", "typeVersion": 3, "position": [ 8048, -2608 ], "id": "e74633c2-5eeb-46ff-bf0c-5d790f5961a6", "name": "Loop Over Items" }, { "parameters": {}, "type": "n8n-nodes-base.noOp", "name": "Replace Me", "typeVersion": 1, "position": [ 8464, -2608 ], "id": "19d1d369-3d1a-4612-8d1e-3e88bf96767e" }, { "parameters": { "resource": "image", "modelId": { "__rl": true, "value": "models/imagen-3.0-generate-002", "mode": "list", "cachedResultName": "models/imagen-3.0-generate-002" }, "options": {} }, "type": "@n8n/n8n-nodes-langchain.googleGemini", "typeVersion": 1, "position": [ 9184, -2592 ], "id": "6e505443-793c-4622-ad11-06b2f95cfc01", "name": "Generate an image", "credentials": { "googlePalmApi": { "id": "ltScf0vk0FoAfpKF", "name": "Google Gemini(PaLM) Api account" } } } ], "connections": { "Get Text Segments": { "main": [ [ { "node": "Create Image Prompts From Segments", "type": "main", "index": 0 } ] ] }, "Create Image Prompts From Segments": { "main": [ [ { "node": "Edit Fields11", "type": "main", "index": 0 } ] ] }, "Edit Fields11": { "main": [ [ { "node": "Code3", "type": "main", "index": 0 } ] ] }, "Code3": { "main": [ [ { "node": "Loop Over Items", "type": "main", 
"index": 0 } ] ] }, "Loop Over Items": { "main": [ [ { "node": "Generate an image", "type": "main", "index": 0 } ], [ { "node": "Replace Me", "type": "main", "index": 0 } ] ] }, "Replace Me": { "main": [ [ { "node": "Loop Over Items", "type": "main", "index": 0 } ] ] }, "Generate an image": { "main": [ [] ] } }, "pinData": {}, "meta": { "templateCredsSetupCompleted": true, "instanceId": "87038e00a86ecc84a4953697b77d06837e86f175adf54a526e48c42faeff2bdb" } }

I made a new one that calls the Vertex API directly as a substitute, but I believe this was the exact state of the previous one that was giving me the issues above.
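The Code3 node in the workflow above handles a prompt arriving as an array, a plain string, or a stringified array. That branching can be sketched as a standalone function (plain Node.js; the "Generate an image" default fallback follows the node's own code):

```javascript
// Extracts a single prompt string from the various shapes the
// "prompt" field can arrive in, mirroring the Code3 node's logic.
function extractPromptText(prompt) {
  if (Array.isArray(prompt)) {
    // It's an array: take the first element.
    return prompt[0];
  }
  if (typeof prompt === "string") {
    // Check for a stringified array like '["text"]'.
    if (prompt.startsWith("[")) {
      try {
        const parsed = JSON.parse(prompt);
        return Array.isArray(parsed) ? parsed[0] : parsed;
      } catch (e) {
        return prompt; // not valid JSON, treat as plain text
      }
    }
    return prompt; // already a plain string
  }
  // No usable prompt field: fall back to a default.
  return "Generate an image";
}

console.log(extractPromptText(["a sunset"]));   // -> "a sunset"
console.log(extractPromptText('["a forest"]')); // -> "a forest"
console.log(extractPromptText("a mountain"));   // -> "a mountain"
console.log(extractPromptText(undefined));      // -> "Generate an image"
```

Run with the Code node set to "Run Once for Each Item" so it sees one `$json` at a time, as the workflow does.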

Hm, it doesn’t look like that could have thrown the error, at least not in its current form. You mentioned the Gemini generation node fails on item 4, but I don’t see how you pass multiple items to the generation node, and it’s currently set to OpenAI, not Gemini.

Provided you haven’t moved on from that flow already, you could adjust it to more closely represent the multiple-item handling logic during which it failed, because otherwise I don’t know if I’m testing the correct thing. Let me know if you do; I would be happy to test.

Thank you. I will likely revisit that workflow in a few days and will get back to you with the original error-causing workflow when I do, as that one may have been adjusted since I changed my process to calling the API directly.

I’m back working on this workflow and I’m experiencing the same issue, slightly different in that the error is popping up on item [7] instead of item [4]. Here’s the workflow and error message:

Error: { "errorMessage": "Cannot read properties of undefined (reading 'map')", "errorDetails": {}, "n8nDetails": { "nodeName": "Generate an image", "nodeType": "@n8n/n8n-nodes-langchain.googleGemini", "nodeVersion": 1, "resource": "image", "operation": "generate", "itemIndex": 0, "time": "9/25/2025, 11:54:45 AM", "n8nVersion": "1.109.2 (Cloud)", "binaryDataMode": "filesystem", "stackTrace": [ "NodeOperationError: Cannot read properties of undefined (reading 'map')", " at ExecuteContext.router (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/@n8n+n8n-nodes-langchain@file+packages+@n8n+nodes-langchain_6a42402e1b434941076375196b5319e5/node_modules/@n8n/n8n-nodes-langchain/nodes/vendors/GoogleGemini/actions/router.ts:60:10)", " at processTicksAndRejections (node:internal/process/task_queues:105:5)", " at ExecuteContext.execute (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/@n8n+n8n-nodes-langchain@file+packages+@n8n+nodes-langchain_6a42402e1b434941076375196b5319e5/node_modules/@n8n/n8n-nodes-langchain/nodes/vendors/GoogleGemini/GoogleGemini.node.ts:15:10)", " at WorkflowExecute.executeNode (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/n8n-core@file+packages+core_@[email protected]_@[email protected]_5aee33ef851c7de341eb325c6a25e0ff/node_modules/n8n-core/src/execution-engine/workflow-execute.ts:1253:8)", " at WorkflowExecute.runNode (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/n8n-core@file+packages+core_@[email protected]_@[email protected]_5aee33ef851c7de341eb325c6a25e0ff/node_modules/n8n-core/src/execution-engine/workflow-execute.ts:1427:11)", " at /usr/local/lib/node_modules/n8n/node_modules/.pnpm/n8n-core@file+packages+core_@[email protected]_@[email protected]_5aee33ef851c7de341eb325c6a25e0ff/node_modules/n8n-core/src/execution-engine/workflow-execute.ts:1727:27", " at /usr/local/lib/node_modules/n8n/node_modules/.pnpm/n8n-core@file+packages+core_@[email protected]_@[email 
protected]_5aee33ef851c7de341eb325c6a25e0ff/node_modules/n8n-core/src/execution-engine/workflow-execute.ts:2303:11" ] } }

Workflow:

Let me know what you think - thanks!

Let me know if you need the FULL workflow too and I can paste that in

Hey again! Could you try and put a wait node that simply waits for 5 seconds somewhere inside the loop, so we can test if it’s not a symptom of a rate-limit?
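The idea behind the Wait node can also be reasoned about outside n8n: space out sequential calls with a fixed delay and see whether the failures stop. A generic sketch (`generateImage` is a hypothetical stand-in for the real API call, not an actual n8n or Gemini function):

```javascript
// Sequentially processes prompts with a fixed pause between calls,
// the same idea as putting a Wait node inside the loop.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function processWithDelay(prompts, generateImage, delayMs = 5000) {
  const results = [];
  for (const prompt of prompts) {
    // One call at a time, then breathing room to avoid rate limits.
    results.push(await generateImage(prompt));
    await sleep(delayMs);
  }
  return results;
}
```

If the error disappears with the delay in place, a rate limit is the likely culprit; if it persists, the problem is in the data reaching the node.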

Yep, still getting the issue: { "nodes": [ { "parameters": { "assignments": { "assignments": [ { "id": "00221546-0e02-45f3-9c79-dd37507e3715", "name": "id", "value": "={{ $json.id }}", "type": "string" }, { "id": "e720aafe-1a1c-495a-84de-abccfad3c136", "name": "prompt", "value": "={{ $json.text }}", "type": "string" } ] }, "options": {} }, "type": "n8n-nodes-base.set", "typeVersion": 3.4, "position": [ 6992, -2608 ], "id": "179841f5-4ab5-449d-b5fc-6390676fa273", "name": "Get Text Segments" }, { "parameters": { "modelId": { "__rl": true, "value": "gpt-5-mini", "mode": "list", "cachedResultName": "GPT-5-MINI" }, "messages": { "values": [ { "content": "=Create an image prompt for this script segment:\n\"{{ $json.prompt }}\"\n\nReturn JSON in this format:\n{\n \"id\": {{ $json.id }},\n \"image_prompt\": \"<description>\"\n}" } ] }, "jsonOutput": true, "options": {} }, "type": "@n8n/n8n-nodes-langchain.openAi", "typeVersion": 1.8, "position": [ 7152, -2608 ], "id": "ebb7ea7c-a915-46af-84a1-3aa81dd33f25", "name": "Create Image Prompts From Segments", "credentials": { "openAiApi": { "id": "0z4HRp6ko6NNWm2M", "name": "OpenAi account" } } }, { "parameters": { "assignments": { "assignments": [ { "id": "9be5f7fd-fe25-42cb-8eed-ef83390988ba", "name": "id", "value": "={{ $json.message.content.id }}", "type": "number" }, { "id": "abedc61c-4a73-467f-8cec-5654cbfeba5d", "name": "prompt", "value": "={{ [ $json.message.content.image_prompt ] }}", "type": "array" } ] }, "options": {} }, "type": "n8n-nodes-base.set", "typeVersion": 3.4, "position": [ 7632, -2608 ], "id": "eb36d785-e6b2-4e91-962c-601df57d3016", "name": "Edit Fields11" }, { "parameters": { "fieldToSplitOut": "=message.content.blocks", "options": {} }, "type": "n8n-nodes-base.splitOut", "typeVersion": 1, "position": [ 6416, -2496 ], "id": "bfba8284-91bb-46cd-b136-162f825bea0a", "name": "Split Out" }, { "parameters": { "options": {} }, "type": "n8n-nodes-base.splitInBatches", "typeVersion": 3, "position": [ 8048, -2608 ], 
"id": "e74633c2-5eeb-46ff-bf0c-5d790f5961a6", "name": "Loop Over Items" }, { "parameters": { "resource": "image", "modelId": { "__rl": true, "value": "models/imagen-3.0-generate-002", "mode": "list", "cachedResultName": "models/imagen-3.0-generate-002" }, "prompt": "={{ $json.prompt[0] }}", "options": {} }, "type": "@n8n/n8n-nodes-langchain.googleGemini", "typeVersion": 1, "position": [ 9184, -2592 ], "id": "6e505443-793c-4622-ad11-06b2f95cfc01", "name": "Generate an image", "credentials": { "googlePalmApi": { "id": "ltScf0vk0FoAfpKF", "name": "Google Gemini(PaLM) Api account" } } }, { "parameters": { "amount": 10 }, "type": "n8n-nodes-base.wait", "typeVersion": 1.1, "position": [ 8608, -2592 ], "id": "cb6d1b9a-6b7c-45b8-b4e7-290088073855", "name": "Wait1", "webhookId": "145316e2-d801-4a92-9c30-81b2ca8e1a40" } ], "connections": { "Get Text Segments": { "main": [ [ { "node": "Create Image Prompts From Segments", "type": "main", "index": 0 } ] ] }, "Create Image Prompts From Segments": { "main": [ [ { "node": "Edit Fields11", "type": "main", "index": 0 } ] ] }, "Edit Fields11": { "main": [ [ { "node": "Loop Over Items", "type": "main", "index": 0 } ] ] }, "Split Out": { "main": [ [ { "node": "Get Text Segments", "type": "main", "index": 0 } ] ] }, "Loop Over Items": { "main": [ [], [ { "node": "Wait1", "type": "main", "index": 0 } ] ] }, "Generate an image": { "main": [ [ { "node": "Loop Over Items", "type": "main", "index": 0 } ] ] }, "Wait1": { "main": [ [ { "node": "Generate an image", "type": "main", "index": 0 } ] ] } }, "pinData": {}, "meta": { "templateCredsSetupCompleted": true, "instanceId": "87038e00a86ecc84a4953697b77d06837e86f175adf54a526e48c42faeff2bdb" } }

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.