{
"name": "My workflow 2",
"nodes": [
{
"parameters": {},
"type": "n8n-nodes-base.manualTrigger",
"typeVersion": 1,
"position": [
0,
0
],
"id": "decd683c-6665-4251-b3d6-f9697d47c368",
"name": "When clicking ‘Execute workflow’"
},
{
"parameters": {
"assignments": {
"assignments": [
{
"id": "7a24141d-be67-49a6-a360-4b56ca56856e",
"name": "gdrive",
"value": "https://drive.google.com/file/d/1e6f5VU0L8mFZRs2Xj9t-jpcQIqThbM76/view?usp=sharing",
"type": "string"
},
{
"id": "ea001913-a73a-43f7-bb22-32ba002b391f",
"name": "how many videos",
"value": "3",
"type": "string"
},
{
"id": "d5db66f1-6304-47ea-b855-1be18fb66fc9",
"name": "dialogue",
"value": "I used to wake up every day and pop 4 ibuprofen for my knee pain. Now, all I do is wear this knee brace and my pain goes away.",
"type": "string"
},
{
"id": "eb6e4ba1-e3e6-4d63-bc39-31b1c1b8a6bb",
"name": "model",
"value": "veo3_fast",
"type": "string"
},
{
"id": "7de784fa-78bd-4538-9d57-d2a13b0529cd",
"name": "aspect_ratio",
"value": "vertical",
"type": "string"
},
{
"id": "1a86ac98-7388-47f0-817c-a6c88f6fc748",
"name": "any special requests",
"value": "=For this run - I want normal and casual looking people.\n\nI want the actors in the video to be women between the ages of 55 and 65\n\nHave diversity in the actors' race.",
"type": "string"
}
]
},
"options": {}
},
"type": "n8n-nodes-base.set",
"typeVersion": 3.4,
"position": [
220,
0
],
"id": "96b785fe-90b8-4a92-a2a8-7c63e364644f",
"name": "Edit Fields"
},
{
"parameters": {
"resource": "image",
"operation": "analyze",
"modelId": {
"__rl": true,
"value": "chatgpt-4o-latest",
"mode": "list",
"cachedResultName": "CHATGPT-4O-LATEST"
},
"text": "Return the analysis in YAML format with the following fields:\n\nbrand_name: TitanBrace\n\ncolor_scheme: hex (hex code of each prominent color used)\n\nname: (Descriptive name of the color)\n\nfont_style: (describe the font family or style used: serif/sans-serif, bold/thin, etc.)\n\nvisual_description: (A full sentence or two summarizing what is seen in the image, ignoring the background)\n\nOnly return the YAML. Do not explain or add any other comments.\n",
"imageUrls": "=https://drive.google.com/uc?export=download&id={{ $('Edit Fields').item.json['gdrive'].match(/https:\/\/drive\.google\.com\/file\/d\/([A-Za-z0-9-]+)/)?.[1] }}",
"options": {}
},
"type": "@n8n/n8n-nodes-langchain.openAi",
"typeVersion": 1.8,
"position": [
440,
0
],
"id": "14da1807-b3e8-4ad9-94db-c4629dd7f255",
"name": "Analyze image",
"credentials": {
"openAiApi": {
"id": "jLQ2Yxj2ki8KBTlB",
"name": "OpenAi account"
}
}
},
{
"parameters": {
"promptType": "define",
"text": "=Your task: Create image and video prompts as guided by your system guidelines\n\nMake sure that the reference image is depicted as accurately as possible in the resulting images, especially all text\n\n***\n\nCount of videos to create: {{ $('Edit Fields').item.json['how many videos'] }}\n\n***\n\nDescription of the reference image: {{ $json.content }}\n\n***\n\nThe user's preferred aspect ratio: {{ $('Edit Fields').item.json['aspect_ratio'] }}\n\n***\n\nThe user's preferred model: {{ $('Edit Fields').item.json.model }}\n\n***\n\nThe user's preferred dialogue script: {{ $('Edit Fields').item.json['dialogue'] }}\n\n***\n\nOther special requests from the user: {{ $('Edit Fields').item.json['any special requests'] }}\n\n***\n\nUse the think tool to double check your output",
"options": {
“systemMessage”: "=system_prompt: |\n\n## System Prompt: UGC style VEO3/VEO3_fast prompt generator\n\n You are a UGC (user generated content) AI agent\n\n Your task: Take the reference image or the product in the reference image and place it into realistic, casual scenes as if captured by everyday content creators or influencers.\n\n All outputs must feel natural, candid, and unpolished - avoiding professional or overly staged looks. This means:\n\n - Everyday realism, with authentic, relatable settings\n - Amateur-quality iphone photo/video style\n - Slightly imperfect framing and lighting\n - Candid poses and genuine expressions\n - Visible imperfections (blemishes, messy hair, uneven skin, etc)\n - Real-world environments left as is (clutter, busy backgrounds)\n\nWe need these videos to look natural and real. So in the prompts, have the camera parameter always use keywords like these: unremarkable amateur iphone photos, reddit image, snapchat video, casual iphone selfie, slightly uneven framing, authentic share, slightly blurry, amateur quality phone photo.\n\nIf the dialogue is not provided by the user or you are explicitly asked to create it, generate a casual, conversational line under 200 characters, as if a person were speaking naturally to a friend while talking about the product. Avoid overly-formal or sales like language. The tone should feel authentic, spontaneous, and relatable, matching the UGC style. For example: So TikTok made me buy this…and it turns out it’s the best tasting fruit beer in Sydney? And they donate their profits to charity! And you know what it’s honestly really good!\n\nFor the dialogue, use … to indicate pauses, and avoid special characters like em dashes or hyphens\n\nIMPORTANT: Do NOT use double quotes in the image or video scenes**. 
Never output more or fewer scenes than requested\n\n\nA - Ask:\n Generate image and video generation instructions for AI image and video generation models based on the user’s request, ensuring exact YAML format for both image and video prompts. Infer aspect ratios from vertical/horizontal context; default to vertical if unspecified\n\nScene Count Rule: Read the user’s requested number of video (an explicit integer) and output exactly that many scenes. If the user does not specify a number, default to 1 scene. Never output fewer or more scenes than requested.\n\n\n\nG - Guidance:\n - Always follow UGC - style casual realism principles listed above\n - Ensure diveristy in race, ethnicity, and hair color when applicable. Default to factors in 55 to 65+ year olds unless specificied otherwise. \n - Use provided scene list when available.\n - Avoid double quotes in the image and video prompts\n\n\nE - Example:\n\n good_examples:\n\n - |\n {\n "scenes": [\n {\n "image_prompt": "action: Character is sitting in the driver’s seat, smiling openly while looking at the camera\ncharacter: Mid-50s, grey haired female with long straight hair, wearing a plain light colored t-shirt, natural facial features with minimal makeup\nsetting: Inside a parked car during daytime, sunlight streaming in from the side window, visible backseats and partial view of the road outside\ncamera: Casual iphone selfie, slightly uneven framing, natural lighting with mild overexposure on one side of the face from direct sunlight\nstyle: Very casual and candid, unposed, authentic expression with friendly and cheerful emotion"\n\n "video_prompt": "dialogue: \"So TikTok made me buy this… and it turns out it’s the best tasting fruit beer in Sydney. And they donate their profits to charity! 
And you know what, it’s honestly really good!\"\naction: Character sits in the driver’s seat of a parked car, holding the beer can close to the camera while speaking in the dialogue with a casual, friendly tone\ncamera: Amateur-quality iphone video, natural daylight coming through the side window, steady framing from a handheld position\nemotion: Very happy and energetic, genuine enthusiasm and friendliness while talking about the beer",\n "aspect_ratio_video": "9:16",\n "aspect_ratio_image": "2:3" ,\n "model": "VEO3"\n}\n]\n}\n\n\nN - Notation:\n - Final output is a ‘scenes’ array at the root level.\n - The array must contain exactly ‘scene_count’ objects, where ‘scene_count’ is the user specified number (or 1 if unspecified)\n - Each scene contains:\n - ‘image_prompt’ → stringified YAML with: emotion, action, character, setting, camera, style\n - ‘video_prompt’ → stringified YAML with: dialogue, emotion, voice_type, action, character, setting, camera\n - ‘aspect_ratio_video’ → "9:16" or "16:9" (default vertical → "9:16")\n - ‘aspect_ratio_image’ → "3:2" or "2:3" (default vertical → "2:3")\n - ‘model’ → "VEO3" or "VEO3_fast"\n\n\n\nT - Tools:\n - Think Tool: Double check output for completeness, diversity, adherence to style, so that the number of scenes exactly matches the requested count.\n\n\n\n "
}
},
"type": "@n8n/n8n-nodes-langchain.agent",
"typeVersion": 2,
"position": [
660,
0
],
"id": "b2503ba4-bb2e-4a62-affa-26e770c8b192",
"name": "AI Agent"
},
{
"parameters": {
"model": {
"__rl": true,
"value": "gpt-4.1",
"mode": "list",
"cachedResultName": "gpt-4.1"
},
"options": {}
},
"type": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
"typeVersion": 1.2,
"position": [
680,
220
],
"id": "32ae482a-a912-4778-98de-b79b49a37f5b",
"name": "OpenAI Chat Model",
"credentials": {
"openAiApi": {
"id": "jLQ2Yxj2ki8KBTlB",
"name": "OpenAi account"
}
}
},
{
"parameters": {},
"type": "@n8n/n8n-nodes-langchain.toolThink",
"typeVersion": 1,
"position": [
800,
220
],
"id": "17d6adf3-2759-4760-81c9-4507651a1b07",
"name": "Think"
},
{
"parameters": {
"jsCode": "const jsonData = {\n  \"scenes\": [\n    {\n      \"image_prompt\": \"emotion: [string]\\naction: [string]\\ncharacter: [string]\\nsetting: [string]\\ncamera: [string]\\nstyle: [string]\",\n      \"video_prompt\": \"dialogue: [string]\\nemotion: [string]\\nvoice_type: [string]\\naction: [string]\\ncharacter: [string]\\nsetting: [string]\\ncamera: [string]\",\n      \"aspect_ratio_video\": \"[9:16 or 16:9]\",\n      \"aspect_ratio_image\": \"[3:2 or 2:3]\",\n      \"model\": \"[VEO3 or VEO3_fast]\"\n    }\n  ]\n};\n\n// Just return the string directly\nreturn JSON.stringify(jsonData);",
"specifyInputSchema": true,
"jsonSchemaExample": "{\n  \"scenes\": [\n    {\n      \"image_prompt\": \"emotion: [string]\\naction: [string]\\ncharacter: [string]\\nsetting: [string]\\ncamera: [string]\\nstyle: [string]\",\n      \"video_prompt\": \"dialogue: [string]\\nemotion: [string]\\nvoice_type: [string]\\naction: [string]\\ncharacter: [string]\\nsetting: [string]\\ncamera: [string]\",\n      \"aspect_ratio_video\": \"[9:16 or 16:9]\",\n      \"aspect_ratio_image\": \"[3:2 or 2:3]\",\n      \"model\": \"[VEO3 or VEO3_fast]\"\n    }\n  ]\n}"
},
"type": "@n8n/n8n-nodes-langchain.toolCode",
"typeVersion": 1.3,
"position": [
920,
220
],
"id": "25258f7c-cd00-4212-8238-36df0ad89e06",
"name": "Code Tool"
}
],
"pinData": {},
"connections": {
"When clicking ‘Execute workflow’": {
"main": [
[
{
"node": "Edit Fields",
"type": "main",
"index": 0
}
]
]
},
"Edit Fields": {
"main": [
[
{
"node": "Analyze image",
"type": "main",
"index": 0
}
]
]
},
"Analyze image": {
"main": [
[
{
"node": "AI Agent",
"type": "main",
"index": 0
}
]
]
},
"OpenAI Chat Model": {
"ai_languageModel": [
[
{
"node": "AI Agent",
"type": "ai_languageModel",
"index": 0
}
]
]
},
"Think": {
"ai_tool": [
[
{
"node": "AI Agent",
"type": "ai_tool",
"index": 0
}
]
]
},
"AI Agent": {
"main": [
]
},
"Code Tool": {
"ai_tool": [
[
{
"node": "AI Agent",
"type": "ai_tool",
"index": 0
}
]
]
}
},
"active": false,
"settings": {
"executionOrder": "v1"
},
"versionId": "dea901cf-696e-4e95-9b10-7cfd2398f639",
"meta": {
"templateCredsSetupCompleted": true,
"instanceId": "6938850395f6682fc0ade20aeaa1800ca6b26623834269d253adb3d6607f3ac6"
},
"id": "nu8Tf6xvc3jZFGaN",
"tags": []
}
I am having output issues with my workflow. The Code Tool is supposed to output the scene for an AI UGC ad back to my AI Agent, but the output is formatted incorrectly when it reaches the agent. The Code Tool's input needs to be the same as the output it sends back to the AI Agent. I have embedded my workflow above. Please help! For reference, I am building this workflow from YouTube: https://www.youtube.com/watch?v=8ApvS7nE5kQ&t=73s
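For what it's worth, here is what I suspect is happening: the current jsCode builds a hard-coded placeholder template and returns that, so the agent always gets the `[string]` skeleton back instead of the scenes it passed in. A minimal sketch of a pass-through version, assuming the Code Tool exposes the agent's input as the `query` variable (wrapped in a hypothetical `echoScenes` function here so it can be tested outside n8n):

```javascript
// Hypothetical pass-through for the Code Tool body: parse the agent's
// input to confirm it is valid JSON, then return the same structure
// unchanged, so the tool's output matches its input exactly.
function echoScenes(query) {
  const parsed = JSON.parse(query); // throws if the agent sent malformed JSON
  return JSON.stringify(parsed);    // echo the identical structure back
}
```

Inside the actual Code Tool, the body would reduce to a single line like `return JSON.stringify(JSON.parse(query));`, keeping `specifyInputSchema` and `jsonSchemaExample` as they are so the agent still knows the expected scene shape.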