Hi n8n Community,
I am running a self-hosted instance of n8n (v2.8.3) on Windows via npm. I am building a high-volume pipeline to process 30,000+ files that are uploaded in a specific sequential order.
The Workflow Logic: Files are uploaded in sets:

- One "Context" file: contains global information (Brand/Category).
- Repeated pairs of files: one high-res Media file and one Technical Data file.
The Goal: I need to merge the Media URL and the Technical Data into a single database record, while “sticking” the initial Context info (from step 1) to every subsequent record in that batch using $getWorkflowStaticData.
The Problems I am Facing:

- Metadata Loss: In my Google Drive Trigger, I have enabled `fields: *`. In the Download node, I have Always Output Data enabled. However, by the time the item reaches the Code node, the `webContentLink` (the public URL) is missing or blank. Is there a specific way to "lock" this metadata so it survives the binary download step?
- Code Node "Silent" Failure: My Code node is designed to return output only when a pair is completed. However, it often shows "No Output" even when the upstream AI node provides a valid JSON classification. Is there a known persistence issue with static data in sequential triggers on self-hosted Windows instances?
- Sequential Batching Architecture: Since I am processing a 1:2 ratio (1 Image + 1 Data Label), is a Code node with static memory the most stable way to "zip" these two files, or should I be using an Aggregate or Merge node to handle the batching?
- Impact of Wait Node: I have a Wait node at the start. Does this risk desyncing the file order if many files are triggered simultaneously?
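Regarding the metadata-loss problem above, this is roughly what I expected to be able to do in a Code node placed between the Trigger and the Download node (a minimal sketch; `pinWebContentLink` is just my own helper name, not an n8n built-in):

```javascript
// Sketch: copy the Drive metadata into a second plain-JSON key before the
// Download node runs, so the URL survives even if the download step
// rewrites the item. (pinWebContentLink is a hypothetical helper name.)
function pinWebContentLink(item) {
  return {
    json: {
      ...item.json,
      // duplicate the URL under a key the Download node should not touch
      fileWebContentLink: item.json.webContentLink ?? null,
    },
    binary: item.binary,
  };
}

// Example item shaped like the Google Drive Trigger output with fields: *
const triggerItem = {
  json: { id: 'abc123', webContentLink: 'https://drive.google.com/uc?id=abc123' },
};
console.log(pinWebContentLink(triggerItem).json.fileWebContentLink);
```

Even with this duplication in place, the value is still blank downstream, which is why I suspect the binary step.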
The workflow JSON is shown below:
```json
{
  "nodes": [
    {
      "parameters": {
        "triggerOn": "specificFolder",
        "folderToWatch": "DUMMY_FOLDER_ID",
        "event": "fileCreated",
        "options": { "fields": "*" }
      },
      "type": "n8n-nodes-base.googleDriveTrigger",
      "typeVersion": 1,
      "name": "Google Drive Trigger"
    },
    {
      "parameters": {
        "operation": "download",
        "fileId": "={{ $json.id }}",
        "options": { "alwaysOutputData": true }
      },
      "type": "n8n-nodes-base.googleDrive",
      "typeVersion": 3,
      "name": "Download file"
    },
    {
      "parameters": {
        "jsCode": "const staticData = $getWorkflowStaticData('global');\nconst input = items[0].json;\nconst foundUrl = input.webContentLink || input.fileWebContentLink;\nconst aiType = (input.output?.Classification || '').toLowerCase();\n\nif (aiType.includes('context')) {\n  staticData.savedContext = input.output?.Main_Info;\n  return [{ json: { status: 'Context Saved' } }];\n}\n\nif (aiType.includes('media')) {\n  staticData.lastMediaUrl = foundUrl;\n  return [{ json: { status: 'Media URL Cached', url: foundUrl } }];\n}\n\nif (aiType.includes('technical')) {\n  return [{ json: {\n    Global_Context: staticData.savedContext || 'N/A',\n    Linked_Media_URL: staticData.lastMediaUrl || foundUrl,\n    Ready_To_Push: true\n  }}];\n}\nreturn [{ json: { status: 'Skipped', type: aiType } }];"
      },
      "type": "n8n-nodes-base.code",
      "typeVersion": 2,
      "name": "Sequential Merger Logic"
    }
  ]
}
```
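In case it helps anyone reproduce the pairing behavior, here is the same logic from the "Sequential Merger Logic" node pulled out as plain JavaScript, with an ordinary object standing in for `$getWorkflowStaticData('global')` (the `mergeStep` wrapper is mine, added only for testing outside n8n):

```javascript
// Standalone version of the Code node logic; staticData is a plain object
// simulating $getWorkflowStaticData('global') across sequential executions.
function mergeStep(staticData, input) {
  const foundUrl = input.webContentLink || input.fileWebContentLink;
  const aiType = (input.output?.Classification || '').toLowerCase();

  if (aiType.includes('context')) {
    staticData.savedContext = input.output?.Main_Info; // "stick" context for the batch
    return { status: 'Context Saved' };
  }
  if (aiType.includes('media')) {
    staticData.lastMediaUrl = foundUrl; // cache the media URL for the next file
    return { status: 'Media URL Cached', url: foundUrl };
  }
  if (aiType.includes('technical')) {
    return {
      Global_Context: staticData.savedContext || 'N/A',
      Linked_Media_URL: staticData.lastMediaUrl || foundUrl,
      Ready_To_Push: true,
    };
  }
  return { status: 'Skipped', type: aiType };
}

// Simulate one batch in order: context file, then a media/technical pair.
const staticData = {};
mergeStep(staticData, { output: { Classification: 'Context', Main_Info: 'BrandX' } });
mergeStep(staticData, { webContentLink: 'https://example.com/img.jpg', output: { Classification: 'Media' } });
const record = mergeStep(staticData, { output: { Classification: 'Technical' } });
console.log(record);
// { Global_Context: 'BrandX', Linked_Media_URL: 'https://example.com/img.jpg', Ready_To_Push: true }
```

Run like this in isolation, the logic produces the merged record I expect, so the failure seems tied to how static data or the metadata behaves inside n8n rather than to the pairing logic itself.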
Environment Details:

- n8n Version: 2.8.3
- OS: Windows (npm install)
- Binary Storage Mode: Filesystem
- AI Model: Google Gemini 2.5 Flash
Any advice on how to keep the metadata intact and ensure the 1:2 merging is stable for large batches would be greatly appreciated!