Hello n8n community,
I am running into an unexpected behavior with the Read/Write Files from Disk node when reading files in a loop, and it appears related to how n8n processes dynamic expressions versus fixed strings.
Goal: I need to read the same JSON workflow file multiple times (e.g., 5 times) in a sequential loop to process different input data from an upstream Excel sheet.
Setup:
An upstream node outputs 5 items, where multiple items point to the exact same file path (e.g., Row 1 and Row 2 both point to /path/to/workflow.json).
The workflow is running sequentially using Split in Batches (Concurrency: 1).
The next node is Read/Write Files from Disk (Operation: Read File(s) From Disk).
Observed Behavior:
| File Selector Setting | Output Items from Read Node | Behavior |
| :--- | :--- | :--- |
| Fixed string (e.g., `C:/…/workflow.json`) | 5 items | Reads successfully 5 times (as expected for the loop). |
| Dynamic expression (e.g., `{{ $json.workflow_path }}`) | 2 items | Outputs only the unique files (deduplication appears to occur). |
Question:
Why does the Read/Write Files from Disk node implicitly deduplicate the file reading when the path is set via an Expression, but not when it is set via a Fixed string?
Is there an option or setting within the node to disable this implicit cache/deduplication when using dynamic paths, so I can ensure all 5 items are processed, even if they point to the same file?
I am currently considering using a Code node to add a unique timestamp query to the file path to bypass this, but I’d prefer a native solution if one exists.
Any insight into this internal optimization logic would be greatly appreciated!
Thank you!
Thank you very much for the fast response and the helpful tip regarding n8n’s implicit iteration. You are absolutely right that in most cases, a separate Loop node is not needed.
However, for my specific workflow, there are two key reasons why I need to strictly control the iteration:
1. Sequential Execution is Mandatory (The Loop): My downstream node calls a ComfyUI API to generate images/videos. This service is single-threaded and cannot handle parallel requests. If I remove the loop, n8n executes the 5 API calls concurrently, which leads to immediate task failure or unpredictable errors on the ComfyUI server.
Solution: I am using the Split In Batches node with Batch Size = 1 and Concurrency = 1 to force a strictly sequential (one-by-one) execution.
2. File Deduplication Issue (The Expression): This is the trickiest part. Even when running sequentially, if I remove the Loop node, I still run into an issue with the Read/Write Files from Disk node. My Excel sheet might have 5 rows pointing to the same JSON file.
When I use a fixed file path, the node runs 5 times.
When I use an expression (e.g., {{$json.workflow_path_filename}}), the node performs an implicit deduplication and only outputs 2 items (for the 2 unique file paths).
I need the Read/Write Files from Disk node to execute 5 times regardless of whether the file path is the same. I suspect the expression evaluation triggers an internal cache/optimization that the fixed string doesn’t.
I am now looking for a way to disable this implicit deduplication when using expressions, or I will use a Code node to add a unique timestamp to the file path to trick the node into seeing 5 unique file paths.
Any further advice on disabling the file read node’s implicit deduplication would be greatly appreciated!
I understand now.
Yes, you're correct to use the Loop node here.
You should probably also add a "Wait" node…
Your issue is that some of your workflow_path_filename values are invalid file paths, so there are 2 valid paths and 3 invalid ones.
That’s why the Read/Write Files from Disk node doesn’t read those files and produces no output for them.
This isn’t actually a “deduplication” issue, it’s simply that the file paths don’t exist, so there’s no output.
However, if you want to ignore invalid file paths and still produce output (empty output), you can enable the "Always Output Data" option in the node's settings.
That is an excellent point, and you actually led me to discover the true cause of the missing items!
You were absolutely right: it was not a deduplication problem!
The Real Cause: My input file paths in the Excel sheet were invalid. I had manually created the first row’s file path (C:/.../wan_fl2v_test.json), but then I dragged the cell down in Excel to quickly create the other four rows.
This resulted in paths like:
Row 1: .../wan_fl2v_test.json (Valid)
Row 2: .../wan_fl3v_test.json (Invalid - File does not exist on disk)
Row 3: .../wan_fl4v_test.json (Invalid - File does not exist on disk)
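For anyone who hits the same trap: this is roughly what Excel's fill handle does when you drag down a cell containing a filename with an embedded number. It increments the last digit run it finds, silently producing paths that were never created on disk. A sketch for illustration only (the increment rule here is a simplification of Excel's fill-series behavior):

```javascript
// Simulate Excel's fill-handle auto-increment on a string cell:
// dragging down bumps the last number in the text by 1 per row.
function excelFillDown(value, copies) {
  const out = [];
  for (let i = 1; i <= copies; i++) {
    // Replace the last run of digits (no digits after it) with digits + i
    out.push(value.replace(/(\d+)(?=\D*$)/, (n) => String(Number(n) + i)));
  }
  return out;
}

console.log(excelFillDown('wan_fl2v_test.json', 2));
// → [ 'wan_fl3v_test.json', 'wan_fl4v_test.json' ]
```

None of those incremented filenames exist on disk, which is exactly why the read node produced no output for those rows.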
The Read/Write Files from Disk node was simply failing to read the 3 non-existent files and therefore did not produce any output for those items, which looks exactly like a deduplication error in the final output.
Solution: I will either fix the paths in my Excel sheet or enable the “Always Output Data” option in the file node to ensure I get 5 items, even if the file is missing, as you suggested.
Thank you so much for pointing me towards the invalid file path as a potential cause, and for confirming the necessity of the Split In Batches node for my sequential ComfyUI calls. Your advice was incredibly helpful!