I’m working on a workflow that extracts regulation names and their URLs from “Sheet A” of an uploaded Excel file. I then want to compare the scraped regulation articles with the articles stored in the other sheets of the same file, which are named after those regulations.
I’ve encountered a few challenges:
My n8n instance is hosted via Hugging Face Spaces, so I’m using the Chat Node to upload the Excel file.
To compare against every sheet in the Excel file, I need to access all of them. However, the Extract from File node only allows specifying one sheet at a time, so I would have to duplicate the uploaded Excel binary many times, which then triggers a “data too large” error.
My questions are:
Is there a way to open all sheets in an Excel file at once and temporarily store them for subsequent nodes to access?
If I want to compare the content from an Excel sheet with scraped web content, is there a more efficient or recommended approach?
Any guidance or suggestions would be greatly appreciated!
As the Read/Write Files from Disk node documentation points out, you need this node (available only on self-hosted instances, not on n8n Cloud) to write files to disk:
Write File to Disk Operation — you define the destination path and filename. The file is taken from a binary field (such as data) from previous nodes (e.g., the Chat Node).
In self-hosted environments, you can configure binary file storage on disk by modifying variables such as:
N8N_DEFAULT_BINARY_DATA_MODE=filesystem — to enable saving the binary to disk.
N8N_BINARY_DATA_STORAGE_PATH — specifies the folder where n8n should store binary data.
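A minimal sketch of that configuration, assuming a self-hosted setup where you control the environment (the storage path below is just an example — adapt it to your Hugging Face Space):

```shell
# Store binary data on disk instead of in memory:
export N8N_DEFAULT_BINARY_DATA_MODE=filesystem
# Example folder where n8n should keep binary data:
export N8N_BINARY_DATA_STORAGE_PATH=/home/node/.n8n/binary-data
```

With these set before n8n starts, large uploads no longer have to live entirely in memory, which helps with the “data too large” error.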
Example flow
Chat Trigger (or Chat Node) → receives the file
The file remains in item.binary.data
Connect a Read/Write Files from Disk configured as:
Operation: Write File to Disk
File Path and Name: the path where you want to save the file (e.g., /tmp/{{ $binary.data.fileName }})
Input Binary Field: data
The node creates the file on disk, so you have a real file path.