I’m trying to use the Read/Write Binary File node to save results within a Loop node. The loop is supposed to merge previously created videos into one long video.
I am facing one major issue: When trying to write the file to disk, I constantly receive errors such as: "/home/node/meine_datei.webm" is not writable or permission denied.
Things I’ve already tried: I updated my docker-compose.yml to include the correct volume mounts and set the N8N_RESTRICT_FILE_ACCESS_TO environment variable for both the main n8n service and the task-runner service. I also changed the output path to /files/meine_datei.webm to comply with these restrictions and recreated the containers (docker-compose down && up -d).
To rule out host permission issues, I applied chown -R 1000:1000, tested chmod 777, and even added the :z flag for SELinux, but the error persists.
I am using Coolify for deployment—could that be the reason? Or am I fundamentally misunderstanding how this node works? Also, if anyone has a better idea for merging videos within n8n, I’m all ears.
Since you’re on version 2.1.5 and using Task Runners, a common “gotcha” is the shared filesystem. Even if you’ve set the N8N_RESTRICT_FILE_ACCESS_TO variable correctly, both the main n8n container and the task-runner container need to have the exact same volume mounted at the exact same path. If they aren’t perfectly synced, the task-runner might be trying to write to a directory that only exists in its own isolated ephemeral storage.
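In compose terms, the idea looks something like this (service and volume names here are placeholders, not necessarily what Coolify generates for you):

```yaml
services:
  n8n:
    volumes:
      - n8n-files:/files
    environment:
      - N8N_RESTRICT_FILE_ACCESS_TO=/files
  n8n-task-runner:
    volumes:
      - n8n-files:/files   # identical named volume, identical mount path
    environment:
      - N8N_RESTRICT_FILE_ACCESS_TO=/files

volumes:
  n8n-files:
```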
In Coolify, double-check that your files volume is actually attached to both services in your compose file. Just as a test, have you tried writing to a subdirectory within the default /home/node/.n8n/ folder? Sometimes that bypasses external mount headaches.
Merging files inside a loop using binary nodes can be really tough on memory. Most people in the community find it much smoother to use ffmpeg via the Execute Command node, if your Docker image supports it. It’s usually faster and less prone to crashing the workflow.
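Roughly, the command could look like the sketch below. Everything in it is an assumption from this thread: clips named meine_datei_*.webm sitting in /files, and ffmpeg actually present in the image (the stock n8n image may not ship it, so you might need a custom image).

```shell
# Sketch for an Execute Command node: merge the loop's clips losslessly.
# Assumes clips named meine_datei_*.webm in /files and ffmpeg in the image.
merge_clips() {
  dir=$1
  cd "$dir" || return 1
  # ffmpeg's concat demuxer wants a list file: one "file 'name'" line per clip
  : > concat_list.txt
  for f in meine_datei_*.webm; do
    printf "file '%s'\n" "$f" >> concat_list.txt
  done
  # -c copy skips re-encoding: fast and lossless when all clips share codecs
  if command -v ffmpeg >/dev/null 2>&1; then
    ffmpeg -y -f concat -safe 0 -i concat_list.txt -c copy merged.webm
  fi
}
# In the node's Command field you would then run: merge_clips /files
```

Note that -c copy only works when every clip uses the same codec and parameters; if they differ, drop -c copy and accept the (slower) re-encode.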
By the way, could you share how you defined the volumes for the task-runner in your compose file?
Honestly, for merging videos in n8n you’re probably better off using an Execute Command node with ffmpeg instead of juggling binary files through the Read/Write node; you’ll have far fewer permission headaches that way. If you do need the file node, make sure the task-runner container has the exact same volume mount at the same path as your main n8n container. Coolify sometimes doesn’t propagate those mounts to both services automatically.
I added this to both services (main n8n and the task runner). That didn’t work, so I used Gemini to troubleshoot, and it suggested making sure the named volume is explicitly declared at the end of the compose file like this:
```yaml
volumes:
  n8n-data:
```
Unfortunately, this didn’t change anything either. I tried saving a test file to /home/node/.n8n/test.webm and /.n8n/test.webm, but I still get the same error: The file is not writable or permission denied.
And thanks for the tip with ffmpeg, I will try it once I finally manage to save the videos to disk.
@marexxxxxxx
If you overwrite the exact same file (meine_datei.webm) inside a fast loop, the first iteration locks the file open. When the second iteration tries to write, you get a “permission denied” error.
Change the filename to something dynamic, like meine_datei_{{$runIndex}}.webm. If it writes successfully, you’re just dealing with a concurrency lock, not a Docker permission problem.
Coolify bind mounts are incredibly stubborn and often ignore host-level chown commands. Bypass the host OS entirely and force the ownership change from inside the running container: docker exec -u root -it <your-main-n8n-container> chown -R node:node /files
@marexxxxxxx
Are you running n8n in queue mode with Redis? If so, the Read/Write node actually executes on the worker container, not the main one, and definitely not the task-runner. If you have a worker, the volume has to be mounted there as well.
No, I don’t think so. To do that, I would need a Redis instance in my Docker Compose, right? If that’s the case, then no—there is no Redis instance. There are only the two services: n8n and the task runner.
Since your original docker-compose.yml mapped the volume to /files, what happens if you point the Read/Write node exactly to /files/meine_datei_{{$runIndex}}.webm?
I think it’s because the host folder was likely created by the root user before the container started, so the node user inside n8n is now being locked out by the OS.
Run this command in your server terminal to force the node user to own that folder from within the container’s perspective: docker exec -u root -it <your-n8n-container-name> chown -R node:node /files
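If the chown goes through but writes still fail, try an actual write as the node user to pin down where the problem is (the container name is a placeholder):

```shell
# Does the unprivileged node user really have write access to /files?
docker exec -u node -it <your-n8n-container-name> \
  sh -c 'touch /files/_writetest && rm /files/_writetest && echo writable'
```

If this prints writable, the volume permissions are fine and the problem lies elsewhere (n8n config, or a different container doing the write); if it fails, it’s still filesystem ownership.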
No, I am getting exactly the same error message. Here is the detailed stack trace:
Item Index: 0
Node type: n8n-nodes-base.readWriteFile
Node version: 1.1 (Latest)
n8n version: 2.1.5 (Self Hosted)
Time: 3/5/2026, 7:48:36 PM

Stack trace:

NodeApiError: The file “/files/meine_datei.webm” is not writable.
    at ExecuteContext.execute (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/n8n-nodes-base@file+packages+nodes-base_@[email protected]_asn1.js@5_8da18263ca0574b0db58d4fefd8173ce/node_modules/n8n-nodes-base/nodes/Files/ReadWriteFile/actions/write.operation.ts:130:10)
    at ExecuteContext.execute (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/n8n-nodes-base@file+packages+nodes-base_@[email protected]_asn1.js@5_8da18263ca0574b0db58d4fefd8173ce/node_modules/n8n-nodes-base/nodes/Files/ReadWriteFile/ReadWriteFile.node.ts:69:17)
    at WorkflowExecute.executeNode (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/n8n-core@file+packages+core_@[email protected]_@[email protected]_ec37920eb95917b28efaa783206b20f3/node_modules/n8n-core/src/execution-engine/workflow-execute.ts:1045:8)
    at WorkflowExecute.runNode (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/n8n-core@file+packages+core_@[email protected]_@[email protected]_ec37920eb95917b28efaa783206b20f3/node_modules/n8n-core/src/execution-engine/workflow-execute.ts:1226:11)
    at /usr/local/lib/node_modules/n8n/node_modules/.pnpm/n8n-core@file+packages+core_@[email protected]_@[email protected]_ec37920eb95917b28efaa783206b20f3/node_modules/n8n-core/src/execution-engine/workflow-execute.ts:1662:27
    at /usr/local/lib/node_modules/n8n/node_modules/.pnpm/n8n-core@file+packages+core_@[email protected]_@[email protected]_ec37920eb95917b28efaa783206b20f3/node_modules/n8n-core/src/execution-engine/workflow-execute.ts:2274:11
This is a Coolify-specific issue that trips up a lot of people. The problem is that Coolify manages volumes differently than raw docker-compose — when you add volumes through the Coolify UI or compose file, it doesn’t always apply them consistently to both the main n8n service AND the task-runner service.
The actual fix for Coolify:
In your Coolify service config, make sure BOTH services have the volume defined identically. The task-runner is a separate service and needs its own explicit volume mount — Coolify won’t inherit it automatically.
Set N8N_RESTRICT_FILE_ACCESS_TO=/files on BOTH services.
The Coolify container runs n8n as user node (UID 1000). The issue is usually that your host directory /data/n8n/files is owned by root or a different UID. Fix this on the host: sudo chown -R 1000:1000 /data/n8n/files
After changing host permissions, do a full redeploy in Coolify (not just restart) — Coolify caches container configs and a simple restart won’t pick up the volume ownership change.
For the video merging part:
A_A4 is right about ffmpeg via Execute Command node being much better for this use case. Even better — use an HTTP Request node to call a simple API endpoint that handles the merge, so you keep n8n as pure orchestration. Binary file handling inside n8n loops is painful for large files.
The permission error at write.operation.ts:130 confirms it’s a pure filesystem ownership issue, not an n8n bug. Fix the host directory ownership and you’re done.