I have a self-deployed n8n instance. The HTTP Request node can make calls to RFC 1918 addresses, 169.254.169.254, and localhost, which can be used to read instance metadata and call internal endpoints. This allows an external user to call internal endpoints and gather information about the node running the service.
What is the error message (if any)?
Please share your workflow
Share the output returned by the last node
It would return the instance metadata, including the AWS access keys, when provided the token. The token can be retrieved with a PUT call to the token endpoint on the same address.
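For context, the two-step exchange described above is the documented AWS IMDSv2 flow, which an SSRF through the HTTP Request node can replay. A minimal sketch of the two requests involved (the role name here is a placeholder, not from the report):

```python
# The IMDSv2 endpoint is link-local and reachable only from inside the instance,
# which is exactly why an SSRF that can reach it is dangerous.
IMDS = "http://169.254.169.254"

def token_request():
    """Step 1: PUT to the token endpoint; the response body is a session token."""
    return ("PUT", f"{IMDS}/latest/api/token",
            {"X-aws-ec2-metadata-token-ttl-seconds": "21600"})

def credentials_request(token, role):
    """Step 2: GET the role credentials, presenting the token in a header.
    `role` is a placeholder for whatever IAM role the instance carries."""
    return ("GET", f"{IMDS}/latest/meta-data/iam/security-credentials/{role}",
            {"X-aws-ec2-metadata-token": token})
```

Each tuple is (method, URL, headers) — the same three things an attacker would set on the HTTP Request node.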
Information on your n8n setup
n8n version:
Database (default: SQLite): postgres (RDS)
n8n EXECUTIONS_PROCESS setting (default: own, main): main
Running n8n via (Docker, npm, n8n cloud, desktop app): Kubernetes
This is probably a Docker problem. You might need to give Docker some parameters so it can reach your servers. Can you access the server from this or a different Docker container?
You are correct, this is a serious security risk. The solution is not to change the workflow, but to configure your n8n server.
n8n has built-in security settings for this. You need to add these two environment variables to your Kubernetes deployment file:
N8N_BLOCK_INTERNAL_NETWORKS=true
This is the main setting. It blocks all requests to private networks, localhost, and the cloud metadata service (169.254.169.254).
N8N_ALLOWED_HOSTS=your-internal-db.local,your-internal-api.local
(Optional) Use this to create an “allow list” for any specific internal services that your workflows do need to access.
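Independent of the exact variable names, the kind of check a block-internal-networks setting performs can be sketched like this. This is not n8n's actual implementation, just an illustration using Python's stdlib `ipaddress` module; the allow-list entry is the placeholder hostname from above:

```python
import ipaddress
from urllib.parse import urlsplit

# Hypothetical allow-list, mirroring the N8N_ALLOWED_HOSTS idea above.
ALLOWED_HOSTS = {"your-internal-db.local", "your-internal-api.local"}

def is_blocked(url: str) -> bool:
    """Return True if the URL targets localhost, a private range, or the
    link-local metadata service (169.254.0.0/16 includes 169.254.169.254)."""
    host = urlsplit(url).hostname or ""
    if host in ALLOWED_HOSTS:
        return False
    if host == "localhost":
        return True
    try:
        addr = ipaddress.ip_address(host)
    except ValueError:
        # Not an IP literal; a real filter would resolve DNS first and
        # re-check the resolved address to prevent rebinding tricks.
        return False
    return addr.is_private or addr.is_loopback or addr.is_link_local
```

Note the comment about DNS: checking only the literal hostname is not enough, because an attacker can point a public DNS name at an internal address.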
Action Plan:
Add these environment variables to the env: section of your n8n Kubernetes deployment file.
Apply the changes (kubectl apply …). Your n8n pod will restart with the security protections enabled.
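Applied to the deployment, the env: section from step 1 might look like the fragment below. The variable names are the ones given above and the hostnames are placeholders; adjust both to your cluster:

```yaml
# Fragment of an n8n Deployment spec; only the relevant env: entries are shown.
spec:
  template:
    spec:
      containers:
        - name: n8n
          env:
            - name: N8N_BLOCK_INTERNAL_NETWORKS
              value: "true"
            - name: N8N_ALLOWED_HOSTS
              value: "your-internal-db.local,your-internal-api.local"
```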
If you found this helpful, please mark it as the solution and give it a like.
Hey @ChrisOnN8N, I tried blocking network access at the Docker network level; the problem is that the pod is then unable to reach the DB, which is also on the internal network.