Run the n8n Docker Hub image on AWS ECS Fargate with EFS

Describe the problem/error/question

I created an AWS ECS service on Fargate using the n8n Docker Hub image `n8nio/n8n:latest`. The service uses EFS with an access point that mounts to `/home/node/.n8n`. The EFS access point's uid and gid are set to 1000:1000 and its permissions to 0755. The VPC security groups allow the EFS traffic, and the EFS file system policy allows "elasticfilesystem:ClientRootAccess", "elasticfilesystem:ClientWrite" and "elasticfilesystem:ClientMount". But I couldn't start the service successfully on ECS Fargate due to a directory access issue. Any help is appreciated.
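For reference, the access point setup described above corresponds roughly to the following CloudFormation sketch (we use CloudFormation/CDK; the resource names, file system reference and root directory path are placeholders, not our actual values):

N8nAccessPoint:
  Type: AWS::EFS::AccessPoint
  Properties:
    FileSystemId: !Ref N8nFileSystem   # placeholder reference to the EFS file system
    PosixUser:
      Uid: "1000"                      # the 'node' user in the n8n image
      Gid: "1000"
    RootDirectory:
      Path: /n8n                       # placeholder root directory on the file system
      CreationInfo:                    # created with this owner/mode if it doesn't exist
        OwnerUid: "1000"
        OwnerGid: "1000"
        Permissions: "0755"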

What is the error message (if any)?


root: /usr/local/lib/node_modules/n8n
code: EACCES
message: EACCES: permission denied, open '/home/node/.n8n/config'
See more details with DEBUG=*
(Use `node --trace-warnings ...` to show where the warning was created)

Please share your workflow

n/a

Share the output returned by the last node

n/a

Information on your n8n setup

  • **n8n version:** n8nio/n8n:latest
  • **Database (default: SQLite):** SQLite
  • **n8n EXECUTIONS_PROCESS setting (default: own, main):** unknown
  • **Running n8n via (Docker, npm, n8n cloud, desktop app):** container on AWS ECS Fargate
  • **Operating system:** Linux/ARM64

Can you access the container shell?
https://phase2.github.io/devtools/common-tasks/ssh-into-a-container/

Maybe you can check the owner and permissions of the files and folders.

I can't access the container, as the ECS service cannot start up properly; the task is deprovisioned before it reaches the running state. Here are the volume details of the task definition and the EFS access point config. Do you see any problems?
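In CloudFormation terms, the wiring is roughly the following (trimmed sketch; the IDs are placeholders and unrelated required fields such as the execution role are omitted):

N8nTaskDefinition:
  Type: AWS::ECS::TaskDefinition
  Properties:
    RequiresCompatibilities: [FARGATE]
    NetworkMode: awsvpc
    Cpu: "1024"
    Memory: "2048"
    Volumes:
      - Name: n8n-data
        EFSVolumeConfiguration:
          FilesystemId: fs-XXXXXXXX        # placeholder
          TransitEncryption: ENABLED       # required when using IAM auth with an access point
          AuthorizationConfig:
            AccessPointId: fsap-XXXXXXXX   # placeholder
            IAM: ENABLED
    ContainerDefinitions:
      - Name: n8n
        Image: n8nio/n8n:latest
        PortMappings:
          - ContainerPort: 5678
        MountPoints:
          - SourceVolume: n8n-data         # must match the volume name above
            ContainerPath: /home/node/.n8n
            ReadOnly: false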

Hi,

As a test, did you try setting it to 0777 to see if the error disappears?

If I remember correctly, there is something about recursive rights and ownership.

Also, this might be relevant: ubuntu - EFS mount failing with mount.nfs4: access denied by server - Stack Overflow

regards
Jiri.

Hi,

Before I tried 0777, I noticed a config file had been created under `/home/node/.n8n`. The config file shows `-rw-r--r--. 1 1000 1000 56 Apr 4 05:55 config`. Then I created a new access point with 1000:1000 and 0777 and tried to redeploy and launch the service on Fargate; the container exited with code 1, and there are no CloudWatch logs any more. What else can I try?

Regards,
PH

Hi, while I was working with the k8s examples for n8n, I came across something interesting:

  initContainers:
    - name: volume-permissions
      image: busybox:1.36
      # fix ownership of the data directory before the n8n container starts
      command: ["sh", "-c", "chown 1000:1000 /data"]
      volumeMounts:
        - name: n8n-claim0
          mountPath: /data

This confirms my suspicion that some ownership changes are needed after mounting with EFS as well.
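If you want to try the same trick on ECS, an untested sketch would be a non-essential helper container in the task definition that fixes ownership before n8n starts (the container and volume names here are just examples, assuming a volume called n8n-data):

ContainerDefinitions:
  - Name: volume-permissions
    Image: busybox:1.36
    Essential: false                    # runs once, exits, and doesn't keep the task alive
    Command: ["sh", "-c", "chown -R 1000:1000 /home/node/.n8n"]
    MountPoints:
      - SourceVolume: n8n-data
        ContainerPath: /home/node/.n8n
  - Name: n8n
    Image: n8nio/n8n:latest
    Essential: true
    DependsOn:
      - ContainerName: volume-permissions
        Condition: SUCCESS              # start n8n only after the chown finished cleanly
    MountPoints:
      - SourceVolume: n8n-data
        ContainerPath: /home/node/.n8n

One caveat: when an EFS access point enforces a POSIX user, all file operations are already mapped to that user, so the chown may be a no-op there.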

Beyond that, I don't have further ideas unfortunately.

Reg,
J.


I don't know the ECS service, but I'll try to brainstorm with you.

The n8n image runs under a non-root user called `node`.
If you get the UID and GID of the `node` user from inside the n8n container, you could (maybe) update the values on your EFS access point.

This is because 1000:1000 might be a root user.

1000:1000 is correct based on the n8n image; there is a `node` user created with exactly this UID and GID. That is why he tried to set it explicitly.

Reg,
J.

Hi, it seems it's not an EFS problem. I removed EFS from the task definition and redeployed the service; it still crashed with container exit code 1, and there are no log items in the CloudWatch log group. What else shall I try?

Sorry, I am out of ideas to be honest; maybe you can check the following Terraform template to see what is actually required to get it working.

You can take and adapt the parts you need.
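One more thing worth checking when there are no log items at all: make sure the container definition actually ships stdout/stderr to CloudWatch, roughly like this (the log group name and region are placeholders):

LogConfiguration:
  LogDriver: awslogs
  Options:
    awslogs-group: /ecs/n8n        # placeholder log group; must exist (or be created) beforehand
    awslogs-region: eu-west-1      # placeholder region
    awslogs-stream-prefix: n8n

The stopped task's "Stopped reason" in the ECS console can also tell you whether the exit happened before the container even started.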

Reg,
J.


Hi, this has not been resolved. Any help is appreciated.

PH

Hi, did you try the template?
Even a simple test using the exact template should give you a definite answer on what is going on. With Terraform and API keys it can be tested in 30 minutes.

Reg
J.

Hi, we use CloudFormation and AWS CDK instead of Terraform. Since this is not really urgent for us, I will look into the template later. Thanks for the response.

PH

Hi, the point of the template is to start from a known working setup and not to waste any unnecessary time, urgent or not.

Reg
J

You're running into a permissions error (EACCES) because the n8n Docker container running on AWS Fargate cannot write to the mounted EFS directory /home/node/.n8n, even though you've seemingly set the UID/GID and permissions correctly.

Trying to do something similar, following for any updates

We managed to get it up and running with EFS finally. Here's the AWS Copilot manifest.yml file for your reference.

# The manifest for the "n8n" service.
#
# Read the n8n environment variables documentation at:
#   Environment Variables Overview | n8n Docs
# Read the full specification for the "Load Balanced Web Service" type at:
#   Load Balanced Web Service - AWS Copilot CLI

# Your service name will be used in naming your resources like log groups, ECS services, etc.
name: n8n
type: Load Balanced Web Service

# Distribute traffic to your service.
http:
  # Requests to this path will be forwarded to your service.
  # To match all requests you can use the "/" path.
  path: '/'
  # You can specify a custom health check path. The default is "/".
  healthcheck:
    path: '/healthz'
    healthy_threshold: 2    # consecutive health check successes required before an unhealthy target is considered healthy
    unhealthy_threshold: 4  # consecutive health check failures required before a target is considered unhealthy
    interval: 30s           # amount of time between health checks
    timeout: 10s            # amount of time during which no response from a target means a failed health check
    grace_period: 360s      # time for containers to bootstrap before failed health checks count towards the maximum number of retries
  deregistration_delay: 30s # time to wait for targets to drain connections during deregistration; align with the n8n graceful shutdown timeout

# Configuration for your containers and service.
image:
  location: n8nio/n8n:latest # The location of the n8n image on Docker Hub.
  # location: n8nio/n8n:stable
  # Port exposed through your container to route traffic to it.
  port: 5678

cpu: 1024    # Number of CPU units for the task.
memory: 2048 # Amount of memory in MiB used by the task.
count: 1     # Number of tasks that should be running in your service.

# platform: linux/x86_64
platform: linux/arm64
exec: true   # Enable running commands in your container.

# Storage configuration for n8n.
storage:
  volumes:
    n8n-data:
      path: '/home/node/.n8n' # Mount path inside the container where n8n stores data.
      read_only: false
      efs:
        id: # your EFS file system ID
        auth:
          iam: true
          access_point_id: # your EFS access point ID

mount_points:
  - source_volume: n8n-data         # Must match the name under storage.volumes
    container_path: /home/node/.n8n # The required path inside the n8n container
    read_only: false

network:
  vpc:
    placement: 'private'

variables:
  N8N_LOG_LEVEL: 'error'
  GENERIC_TIMEZONE: ''
  N8N_TEMPLATES_ENABLED: 'true'
  N8N_METRICS: 'true'
  N8N_USER_FOLDER: '/home/node'
  # Optional: Basic Auth for the n8n UI
  # N8N_BASIC_AUTH_ACTIVE: "true"
  # N8N_BASIC_AUTH_USER: ""
  # N8N_BASIC_AUTH_PASSWORD: ""

# You can override any of the values defined above by environment.
environments:
  test:
    count: 1 # Number of tasks to run for the "test" environment.
    cpu: 512
    memory: 1024
    variables:
      N8N_HOST: 'n8n.test.'
      N8N_EDITOR_BASE_URL: 'https://n8n.test.'
  production:
    count: 1
    variables:
      N8N_HOST: 'n8n.production.'
      N8N_EDITOR_BASE_URL: 'https://n8n.production.'

The formatting and the URLs got messed up after I posted, but you should be able to figure it out based on the above.