Why is my value "undefined"? It's all in a single-line flow

My property somehow comes back as "undefined", even though you can see in the panel on the left that it does exist.

The whole flow is a single straight line of nodes.

What makes it impossible to retrieve? How do I work around it?

Here is my workflow JSON:

{
"nodes": [
{
"parameters": {
"options": {}
},
"type": "@n8n/n8n-nodes-langchain.embeddingsOpenAi",
"typeVersion": 1.2,
"position": [
-2380,
880
],
"id": "42364de5-126c-4415-b130-cf829cbffa9e",
"name": "Embeddings OpenAI2",
"credentials": {
"openAiApi": {
"id": "he7mc0Z8YQRxDH1e",
"name": "OpenAi account"
}
}
},
{
"parameters": {
"model": {
"__rl": true,
"mode": "list",
"value": "gpt-4o-mini"
},
"options": {}
},
"type": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
"typeVersion": 1.2,
"position": [
-2340,
580
],
"id": "4a7e0752-fde0-408b-a315-ed67ca0880b4",
"name": "OpenAI Chat Model1",
"credentials": {
"openAiApi": {
"id": "he7mc0Z8YQRxDH1e",
"name": "OpenAi account"
}
}
},
{
"parameters": {
"modelId": {
"__rl": true,
"value": "gpt-4o-mini",
"mode": "list",
"cachedResultName": "GPT-4O-MINI"
},
"messages": {
"values": [
{
"content": "=You are Javan Zhang's AI clone - 19yo entrepreneur specializing in systemized scaling. Use n8n outputs for all responses.\n\nCore Protocol:\n1. Response Style: \n - Lead with signature phrase (Yessir!/Scaling with systems baby)\n - 1 personal anecdote per interaction \n - Respond Conversationally and ensure most relevant response\n\n2. Value Stack:\n • Automation-first solutions\n • Bottleneck identification \n • Growth mindset triggers\n\n3. Phrase Matrix: \n High Frequency: \"Let me break it down\", \"This changes everything…\" \n Moderate: \"Data shows…\", \"Prototype this tonight\"\n\nEfficiency Rules:\n→ If query > 15 words: \"Before we dive deep - crystal clear on your goal?\"\n→ Code responses as [Action]: [Reason] format\n→ Never exceed 3 consecutive sentences\n",
"role": "system"
},
{
"content": "=Your goal is to get all the data that is being provided and tailor/customize it specifically around the user query so it's 100% relevant. Respond as Javan Zhang would\n\nThe response should be one single blend of information as a coach would put it"
},
{
"content": "=User Query: {{ $('Body Text').item.json.text }}\n\nData:\n\n{{ $json.takeaways }}"
}
]
},
"options": {}
},
"type": "@n8n/n8n-nodes-langchain.openAi",
"typeVersion": 1.8,
"position": [
-1520,
340
],
"id": "90752b42-024b-4d0d-8832-290433f6ba23",
"name": "OpenAI1",
"credentials": {
"openAiApi": {
"id": "he7mc0Z8YQRxDH1e",
"name": "OpenAi account"
}
}
},
{
"parameters": {
"jsonSchemaExample": "[{\n \"videoTitle\": \"Building Coaching Programs That Scale\",\n \"link\": \"https://youtu.be/abc123\",\n \"timeStamp\": \"[03:28:50 - 03:35:53]\",\n \"takeaways\": \"Structure programs around client pain points, price based on outcomes, and validate demand through pre-launch surveys.\"\n}]"
},
"type": "@n8n/n8n-nodes-langchain.outputParserStructured",
"typeVersion": 1.2,
"position": [
-1960,
560
],
"id": "216fb4e8-3b43-42e0-b5f8-4432b5f3b2c7",
"name": "Structured Output Parser"
},
{
"parameters": {
"promptType": "define",
"text": "=Query: {{ $json.text }}",
"hasOutputParser": true,
"options": {
"systemMessage": "=AI assistant retrieving and structuring YouTube transcript snippets with timestamped links\",\n Find relevant transcript snippets matching query parameters Combine segments within 40s overlap/adjacency\",\n Merge semantically related content\n Extend timestamp start -5-15s/end +5-15s (within video bounds)\nsummarize\ntakeaway_requirements\n Specific Actionable steps (35-60 words)\",\n Clear value proposition\n Omit descriptive fluff\n Concise actionable summary\nMaximum token efficiency response\n timeStamp formatting: \"[03:28:50 - 03:35:53]\",\n\n "
}
},
"type": "@n8n/n8n-nodes-langchain.agent",
"typeVersion": 1.7,
"position": [
-2280,
340
],
"id": "01f405ff-47f4-49a1-8c44-9643f5f59262",
"name": "Deepseeked De-Jsoned"
},
{
"parameters": {
"jsCode": "// Extract and process input items\nconst items = $input.all();\n\nconst groupedVideos = {};\n\n// Process each video entry\nitems.forEach(item => {\n item.json.output.forEach(video => {\n // Extract the first timestamp (HH:MM:SS)\n const match = video.timeStamp.match(/\[(\d{2}):(\d{2}):(\d{2})/);\n if (!match) return; // Skip if no valid timestamp\n\n // Convert timestamp to total seconds\n const hours = parseInt(match[1], 10);\n const minutes = parseInt(match[2], 10);\n const seconds = parseInt(match[3], 10);\n const startSeconds = hours * 3600 + minutes * 60 + seconds;\n\n // Identify video group by unique key (title + base link)\n const videoKey = `${video.videoTitle}_${video.link.split('&t=')[0]}`;\n\n if (!groupedVideos[videoKey]) {\n groupedVideos[videoKey] = [];\n }\n\n // Store video data with computed timestamp\n groupedVideos[videoKey].push({\n ...video,\n startSeconds\n });\n });\n});\n\n// Function to merge timestamps and keep all takeaways\nconst mergeTimeRanges = (videos) => {\n videos.sort((a, b) => a.startSeconds - b.startSeconds);\n\n const mergedVideos = [];\n let currentMerge = { ...videos[0], takeaways: [videos[0].takeaways] };\n\n for (let i = 1; i < videos.length; i++) {\n const currentVideo = videos[i];\n\n // Extract end time of the current merged segment\n const endTimeMatch = currentMerge.timeStamp.match(/\[(\d{2}):(\d{2}):(\d{2}) - (\d{2}):(\d{2}):(\d{2})]/);\n let endSeconds = currentMerge.startSeconds;\n \n if (endTimeMatch) {\n const endHours = parseInt(endTimeMatch[4], 10);\n const endMinutes = parseInt(endTimeMatch[5], 10);\n const endSecs = parseInt(endTimeMatch[6], 10);\n endSeconds = endHours * 3600 + endMinutes * 60 + endSecs;\n }\n\n // If the next segment is within 6 minutes, merge it\n if (currentVideo.startSeconds - endSeconds <= 360) {\n const newEndTime = currentVideo.timeStamp.match(/ - (\d{2}):(\d{2}):(\d{2})]/);\n if (newEndTime) {\n currentMerge.timeStamp = currentMerge.timeStamp.replace(/ - \d{2}:\d{2}:\d{2}/, ` - ${newEndTime[1]}:${newEndTime[2]}:${newEndTime[3]}`);\n }\n // Append takeaways instead of replacing\n currentMerge.takeaways.push(currentVideo.takeaways);\n } else {\n // Push the completed merge and start a new one\n currentMerge.takeaways = currentMerge.takeaways.join(\" \"); // Convert array to string\n mergedVideos.push(currentMerge);\n currentMerge = { ...currentVideo, takeaways: [currentVideo.takeaways] };\n }\n }\n\n currentMerge.takeaways = currentMerge.takeaways.join(\" \");\n mergedVideos.push(currentMerge);\n return mergedVideos;\n};\n\n// Apply merging logic\nconst finalVideos = Object.values(groupedVideos)\n .map(mergeTimeRanges)\n .flat()\n .map(video => {\n // Update link with new start time\n const correctedLink = video.link.replace(/(\?|&)t=\d+s?/, '') + `&t=${video.startSeconds}s`;\n\n return {\n json: {\n videoTitle: video.videoTitle,\n link: correctedLink,\n timeStamp: video.timeStamp,\n takeaways: video.takeaways\n }\n };\n });\n\n// Return the final structured output\nreturn finalVideos;\n"
},
"type": "n8n-nodes-base.code",
"typeVersion": 2,
"position": [
-1900,
340
],
"id": "b8359ad8-562a-435a-95c3-bc60f4c6e143",
"name": "Link-inator"
},
{
"parameters": {
"fieldsToAggregate": {
"fieldToAggregate": [
{
"fieldToAggregate": "takeaways"
}
]
},
"options": {}
},
"type": "n8n-nodes-base.aggregate",
"typeVersion": 1,
"position": [
-1700,
340
],
"id": "e85cea97-c2e1-4e77-9be6-d70c50a3721d",
"name": "Aggregate",
"alwaysOutputData": false
},
{
"parameters": {
"jsCode": "// Get all input items\nconst items = $input.all();\n\n// Generate formatted text array\nconst formattedText = items.map(item => {\n return `\"${item.json.videoTitle}\" (${item.json.link}) ${item.json.timeStamp}`;\n}).join('\n\n');\n\n// Extract takeaways into separate array\nconst takeaways = items.map(item => item.json.takeaways);\n\n// Return both formatted text and takeaways as output\nreturn [{ \n json: { \n formattedText,\n takeaways \n } \n}];"
},
"type": "n8n-nodes-base.code",
"typeVersion": 2,
"position": [
-1800,
340
],
"id": "4d2e758d-2180-49bb-82b8-b521f54a75a7",
"name": "Neat-inator"
},
{
"parameters": {
"updates": [
"message"
],
"additionalFields": {}
},
"type": "n8n-nodes-base.telegramTrigger",
"typeVersion": 1.1,
"position": [
-2960,
360
],
"id": "2a46f371-47bc-4a8c-aac4-1719dd787411",
"name": "Telegram Trigger",
"webhookId": "06e60453-4ba8-4e51-8666-783b5f4fbeee",
"credentials": {
"telegramApi": {
"id": "HoA9ebkoEpueVARl",
"name": "Tester"
}
}
},
{
"parameters": {
"rules": {
"values": [
{
"conditions": {
"options": {
"caseSensitive": true,
"leftValue": "",
"typeValidation": "strict",
"version": 2
},
"conditions": [
{
"id": "fb66da42-851e-49f8-90eb-a2ec60eb17f5",
"leftValue": "={{ $json.message.text }}",
"rightValue": "",
"operator": {
"type": "string",
"operation": "exists",
"singleValue": true
}
}
],
"combinator": "and"
},
"renameOutput": true,
"outputKey": "Text"
},
{
"conditions": {
"options": {
"caseSensitive": true,
"leftValue": "",
"typeValidation": "strict",
"version": 2
},
"conditions": [
{
"leftValue": "={{ $json.message.text }}",
"rightValue": "",
"operator": {
"type": "string",
"operation": "notExists",
"singleValue": true
}
}
],
"combinator": "and"
},
"renameOutput": true,
"outputKey": "Audio"
}
]
},
"options": {}
},
"type": "n8n-nodes-base.switch",
"typeVersion": 3.2,
"position": [
-2720,
360
],
"id": "c546adc9-2a73-41bc-b7e6-4b580f3e2f30",
"name": "Switch"
},
{
"parameters": {
"mode": "retrieve-as-tool",
"toolName": "Pinecone",
"toolDescription": "Retrieve data from transcripts for satisfying user's query",
"pineconeIndex": {
"__rl": true,
"value": "zhang-db",
"mode": "list",
"cachedResultName": "zhang-db"
},
"topK": 8,
"options": {
"pineconeNamespace": "JZ"
}
},
"type": "@n8n/n8n-nodes-langchain.vectorStorePinecone",
"typeVersion": 1,
"position": [
-2200,
700
],
"id": "34858697-a65c-4970-a0b7-3e8cd374ce18",
"name": "Javan Zhang",
"credentials": {
"pineconeApi": {
"id": "77DlESSZEOkmmJN9",
"name": "PineconeApi account"
}
}
},
{
"parameters": {
"chatId": "={{ $('Telegram Trigger').item.json.message.chat.id }}",
"text": "a",
"additionalFields": {}
},
"type": "n8n-nodes-base.telegram",
"typeVersion": 1.2,
"position": [
-1100,
340
],
"id": "9e30e0e5-3312-4652-be55-2327b0e9d4e9",
"name": "Telegram",
"webhookId": "835d781b-175f-4004-93e8-768bb21f7af6",
"credentials": {
"telegramApi": {
"id": "HoA9ebkoEpueVARl",
"name": "Tester"
}
}
},
{
"parameters": {
"assignments": {
"assignments": [
{
"id": "6c918935-d280-43d5-91fa-b7b2b495d5c0",
"name": "text",
"value": "={{ $json.text }}{{ $json.body.message.toolCallList[0].function.arguments.Query }}{{ $json.chatInput }}{{ $('Telegram Trigger').item.json.message.text }}",
"type": "string"
}
]
},
"options": {}
},
"type": "n8n-nodes-base.set",
"typeVersion": 3.4,
"position": [
-2460,
340
],
"id": "2e8e4c94-08f6-44f7-8f9b-1a8a6df1727a",
"name": "Body Text"
}
],
"connections": {
"Embeddings OpenAI2": {
"ai_embedding": [
[
{
"node": "Javan Zhang",
"type": "ai_embedding",
"index": 0
}
]
]
},
"OpenAI Chat Model1": {
"ai_languageModel": [
[
{
"node": "Deepseeked De-Jsoned",
"type": "ai_languageModel",
"index": 0
}
]
]
},
"OpenAI1": {
"main": [
[
{
"node": "Telegram",
"type": "main",
"index": 0
}
]
]
},
"Structured Output Parser": {
"ai_outputParser": [
[
{
"node": "Deepseeked De-Jsoned",
"type": "ai_outputParser",
"index": 0
}
]
]
},
"Deepseeked De-Jsoned": {
"main": [
[
{
"node": "Link-inator",
"type": "main",
"index": 0
}
]
]
},
"Link-inator": {
"main": [
[
{
"node": "Neat-inator",
"type": "main",
"index": 0
}
]
]
},
"Aggregate": {
"main": [
[
{
"node": "OpenAI1",
"type": "main",
"index": 0
}
]
]
},
"Neat-inator": {
"main": [
[
{
"node": "Aggregate",
"type": "main",
"index": 0
}
]
]
},
"Telegram Trigger": {
"main": [
[
{
"node": "Switch",
"type": "main",
"index": 0
}
]
]
},
"Switch": {
"main": [
[
{
"node": "Body Text",
"type": "main",
"index": 0
}
],
[]
]
},
"Javan Zhang": {
"ai_tool": [
[
{
"node": "Deepseeked De-Jsoned",
"type": "ai_tool",
"index": 0
}
]
]
},
"Body Text": {
"main": [
[
{
"node": "Deepseeked De-Jsoned",
"type": "main",
"index": 0
}
]
]
}
},
"pinData": {},
"meta": {
"templateCredsSetupCompleted": true,
"instanceId": "9d2ce770f045bd348f1244940ac553bce2bc066863672f7c4812dd9dbcc020fd"
}
}

Information on your n8n setup

  • n8n version: 1.77.3
  • Database (default: SQLite): Default (I assume)
  • n8n EXECUTIONS_PROCESS setting (default: own, main): I am not sure where to get that info.
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Cloud
  • Operating system: Windows 11

When you use a Code node you may end up losing the reference to which input item each output item came from.

See the item linking concept in the docs.

Try to preserve something that identifies the item, maybe an id, or change your Code node to "Run once for each item". A rough sketch of the first option is below.
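As a minimal sketch (assuming a "Run once for all items" Code node; the `sourceIndex` field is just an illustrative name, not something your workflow already has), you can set `pairedItem` on each item you return so that later expressions like `{{ $('Body Text').item.json.text }}` can trace back to the right input item:

```js
// Minimal sketch for a "Run once for all items" Code node.
// pairedItem tells n8n which input item produced each output item,
// which is what expressions like $('Body Text').item rely on.
const out = [];

$input.all().forEach((item, index) => {
  out.push({
    json: {
      ...item.json,            // keep the existing fields (takeaways, link, ...)
      sourceIndex: index,      // optional explicit id you can match on later
    },
    pairedItem: { item: index }, // preserve the item linking
  });
});

return out;
```

With "Run once for each item" you usually don't need this, because n8n pairs each output with the current input item automatically.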

Additionally, to share your workflow here, put the JSON code inside a code block: wrap it in a line of three backticks before and after the JSON so the forum doesn't mangle the quotes.
:muscle: If my reply answers your question, please remember to mark it as a solution.


Hey @Luar_AS were you able to solve this issue?

If yes, then please mark one of the replies as the solution so we can close this topic.
