I have an n8n workflow that reads files from an FTP server; for any files I don’t already have locally, it downloads them from the FTP server and writes them to disk.
A previous workflow with a similar setup executed flawlessly, without any errors. However, I needed to make some adjustments and had to move n8n into a Docker container so it could execute certain commands. I lost my original workflow, so I restored a backup, imported it into the new n8n container, and now it’s no longer working.
When I download from the FTP server, it errors as soon as the file exceeds 2 GiB, which is confusing as I’ve never had this issue before. I’ve seen suggestions to change the binary data mode environment variable to filesystem, which I’ve done, but it doesn’t solve the problem.
For now, I’m streaming directly to disk, but that’s not ideal for me: the old FTP-to-write flow was a much more “silent” process, holding all the data in memory and then writing it out in one go.
Thanks for sharing the detailed explanation; that helps a lot.
The issue you’re facing seems related to the 2 GiB memory buffer limitation that can occur when n8n runs inside Docker, especially when large binary data (like FTP downloads) is handled entirely in memory.
When you were running n8n outside Docker earlier, the Node.js process likely had access to more flexible memory allocation, so it could handle files above 2 GiB. Inside Docker, however, memory constraints (and Node.js’s internal Buffer size limit) can trigger this error once you cross that threshold.
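As a side note, it’s worth checking how much heap the Node.js process inside the container actually gets. You can raise it with the standard NODE_OPTIONS variable; this won’t lift Node’s hard Buffer limit, but it rules out plain heap pressure. The 8192 below is only an example value, so size it to the RAM you actually give the container:

```yaml
# docker-compose.yml snippet (sketch): give the n8n container a larger Node.js heap
# 8192 MiB is an example value; match it to the memory actually available to the container
environment:
  - NODE_OPTIONS=--max-old-space-size=8192
```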
You already tried setting N8N_BINARY_DATA_MODE=filesystem, which was a good step, but keep in mind you may also need to specify the filesystem path and ensure the Docker container has write permissions there. Try setting:
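For example (the exact variable names differ between n8n versions; in recent releases the mode variable is N8N_DEFAULT_BINARY_DATA_MODE, and the storage-path variable below is taken from the same docs, so double-check against your version):

```
# Binary data mode and storage path for the n8n container
# (variable names are from recent n8n docs and may differ in older versions)
N8N_DEFAULT_BINARY_DATA_MODE=filesystem
N8N_BINARY_DATA_STORAGE_PATH=/home/node/.n8n/binaryData
```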
And make sure that directory exists and is mounted correctly as a Docker volume.
Example in your docker-compose.yml or docker run command:
```yaml
volumes:
  - ./n8n_data:/home/node/.n8n
```
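Or, if you launch the container with docker run rather than Compose, something along these lines (image name, port, and paths are the usual defaults and may need adjusting for your setup):

```bash
# Rough equivalent with docker run; adjust image tag, port, and host path as needed
docker run -d --name n8n \
  -p 5678:5678 \
  -e N8N_DEFAULT_BINARY_DATA_MODE=filesystem \
  -v "$(pwd)/n8n_data:/home/node/.n8n" \
  n8nio/n8n
```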
After that, restart the container and check if the FTP node can handle large files again.
If the issue persists even after switching to filesystem mode, it’s likely the FTP node itself is still trying to load everything into memory. In that case, streaming directly to disk (as you’re doing) is the most stable approach, though I understand it’s less “silent.”
You could also consider adding a custom Function node or an Execute Command node to handle the download in chunks and write it to disk incrementally, bypassing the memory limit entirely.
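For instance, an Execute Command node could shell out to curl (assuming curl is available inside the container), which streams the FTP transfer straight to a file on disk instead of buffering it in memory. Host, credentials, and paths below are placeholders:

```bash
# Placeholder host, credentials, and paths; curl streams the transfer to disk rather than buffering it in RAM
curl --fail --silent --show-error \
  --user "ftpuser:ftppassword" \
  "ftp://ftp.example.com/path/to/bigfile.bin" \
  --output "/data/bigfile.bin"
```

The trade-off is that the file bypasses n8n’s binary-data handling entirely, so downstream nodes would need to read it from disk rather than from the item’s binary property.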
I’ve tried the suggestions, and unfortunately it’s still an issue.
I tried migrating to a natively run n8n instance, but I’m still having issues. Any suggestions?
I’ve read that it’s a limitation of the FTP node; however, I downloaded 20 GB files without issue before migrating to Docker, so I’m not sure why it’s a problem now.