Running n8n self-hosted AI in Google Cloud Platform

I followed this tutorial

and finished the installation; the Docker containers and services are running:

weilies_chok@cloudshell:~ (cloud-xp-440601)$ docker ps
CONTAINER ID   IMAGE                  COMMAND                  CREATED         STATUS                   PORTS                              NAMES
4b532c8022cc   n8nio/n8n:latest       "tini -- /docker-ent…"   6 minutes ago   Up 5 minutes             0.0.0.0:5678->5678/tcp             n8n
f191fdcfd326   postgres:16-alpine     "docker-entrypoint.s…"   6 minutes ago   Up 6 minutes (healthy)   5432/tcp                           self-hosted-ai-starter-kit-postgres-1
a2b2f968e36c   qdrant/qdrant          "./entrypoint.sh"        6 minutes ago   Up 6 minutes             0.0.0.0:6333->6333/tcp, 6334/tcp   qdrant
2c4ea525fbef   ollama/ollama:latest   "/bin/ollama serve"      6 minutes ago   Up 6 minutes             0.0.0.0:11434->11434/tcp           ollama

Machine config

Machine type: e2-micro
CPU platform: Intel Broadwell
Architecture: x86/64
OS: debian-12-bookworm-v20241009

I am certain my GCP external IP is correct, but how do I get it running in the cloud? Thanks for any pointers. I understand the GitHub project is meant to run the whole AI stack locally, but it would be fun if I could run it in the cloud (as the e2-micro is in the always-free tier), and I am fine with using my OpenAI API key instead of Ollama, given the limited power of my VM.

n8n version: n8n:latest
Database (default: SQLite): PostgreSQL
n8n EXECUTIONS_PROCESS setting (default: own, main): default
Running n8n via (Docker, npm, n8n cloud, desktop app): Docker
Operating system: debian-12-bookworm-v20241009

It looks like your topic is missing some important information. Could you provide the following, if applicable?

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

Updated with the missing info.

Hey @weilies_chok,

An e2-micro instance likely won’t have the power to run everything properly. If you wanted to give it a bash anyway, the first thing to do would be to work out how you want to expose the service to the outside world. You could do this by opening port 5678 on the firewall, or by using a reverse proxy / tunnel to handle it.

Looking at the error, though, I suspect the problem is down to firewall rules not being in place, so I would start there.
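Something along these lines should open the port (a rough sketch — allow-n8n and the n8n target tag are placeholder names, and your instance name and zone will differ):

# ingress rule for port 5678, applied to instances tagged "n8n"
gcloud compute firewall-rules create allow-n8n \
    --direction=INGRESS --allow=tcp:5678 \
    --source-ranges=0.0.0.0/0 --target-tags=n8n

# tag the VM so the rule applies to it
gcloud compute instances add-tags INSTANCE_NAME --tags=n8n --zone=ZONE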

It shouldn’t be a problem to run on an e2-micro, because my AI model points to OpenAI (consuming my credits instead of running Ollama locally).

sites-available/xxx.duckdns.org.conf … “xxx” is my subdomain from DuckDNS.
Here is the setup:

server {
    server_name xxx.duckdns.org;

    location / {
        # Adjust this to point to the service you want to serve, e.g., an app running on port 5678
        proxy_pass http://localhost:5678;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Upgrade $http_upgrade;
        proxy_http_version 1.1;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/xxx.duckdns.org/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/xxx.duckdns.org/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

}
server {
    if ($host = xxx.duckdns.org) {
        return 301 https://$host$request_uri;
    } # managed by Certbot


    # listen 80;
    listen 5678;
    server_name xxx.duckdns.org;
    return 404; # managed by Certbot


}

sites-enabled/xxx.duckdns.org.conf

(identical content to the sites-available file above)
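For completeness, the config can be validated and reloaded with (assuming the stock systemd nginx service):

# check the config syntax, then reload nginx
sudo nginx -t
sudo systemctl reload nginx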

The GCP firewall is also opened for TCP on ports 5678 and 80.
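The active rules can be double-checked with:

gcloud compute firewall-rules list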

@weilies_chok, while it points to OpenAI, n8n still needs to run things locally, and you are putting a lot on very small resources. It is just something to keep in mind.

Looking at your config now: you are trying to access n8n on port 5678, but nginx will likely be listening on 443. As you are trying to use the IP and port 5678, I would make sure you have n8n set to listen on the correct host network on port 5678 in your Docker config, or at least check that it is running in the container. Then you can try curl from the host to see if you can reach localhost:5678 and work outwards from there.
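For example, from the VM itself (a quick sketch, using the container name from your docker ps output):

# is anything answering locally on 5678?
curl -I http://localhost:5678

# check the n8n container's recent logs
docker logs n8n --tail 50

# see which process owns port 5678 on the host (docker-proxy vs nginx)
sudo ss -tlnp | grep 5678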

Sorry Jon, I was supposed to close this topic. I resolved it by using an ngrok tunnel. Cheers!
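For anyone landing here later, a minimal sketch of that approach (assuming ngrok is installed and authenticated; the WEBHOOK_URL value below is an example, use whatever URL ngrok prints):

# tunnel the local n8n port to a public ngrok URL
ngrok http 5678

# optionally tell n8n its public URL so webhook links resolve correctly,
# e.g. in the container environment / docker-compose:
# WEBHOOK_URL=https://<your-subdomain>.ngrok-free.app/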
