GCP credentials in n8n (Cloud Run)

Hey :waving_hand:

We’re running n8n on GCP Cloud Run.

We want to use GCP services (e.g. Vertex AI / Gemini) using the Cloud Run service account (ADC) — so no manual credentials.

Question

Is there a way in n8n to use GCP credentials via Application Default Credentials / the metadata server, instead of providing a service account JSON in the UI?

Concern

Managing JSON keys per instance feels:

  • insecure
  • hard to rotate/manage at scale

We’d strongly prefer using Cloud Run identity + short-lived tokens.

Is this supported or planned? What’s the recommended approach?

Thanks :folded_hands:

Hi @rgrzesk

I don’t think this is natively supported in n8n’s Google credential UI today. From the docs, n8n still expects OAuth2, service account credentials, or API key depending on the Google node, while Cloud Run ADC works when the application code uses Google’s auth libraries against the metadata server. So the recommended workaround for now is either a standard n8n Google credential or a custom HTTP/code-based approach that uses the Cloud Run service identity explicitly.

Yeah, we’ve run into this with Cloud Run too. ADC via the metadata server isn’t natively supported in n8n’s GCP nodes yet; they always expect a service account JSON in the credentials. The workaround that worked for us: run a lightweight auth sidecar (a simple Node.js script that hits the metadata server and refreshes the access token every 55 minutes), then pass the rotating token to n8n via an HTTP Request node plus a generic OAuth2 credential instead of the built-in GCP node. A bit hacky, but the sidecar handles token rotation, so you don’t have to manage keys manually. Alternatively, if you’re self-hosting on Kubernetes, you could mount a secret and let n8n read it at startup; not as elegant as ADC, but it also avoids JSON key management.

Thanks @tamy.santos & @Benjamin_Behrens for your responses. While searching I already found an open PR with a solution: feat(Google Vertex Node): Add Application Default Credentials (ADC) authentication support by MojRoid · Pull Request #22942 · n8n-io/n8n · GitHub

Hope we can expect this to be merged soon…

@rgrzesk Nice find on that PR! Until it lands you can skip the sidecar and hit the metadata server directly from a Code node: GET http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token with the Metadata-Flavor: Google header, grab the access_token, and pass it into an HTTP Request node for the Vertex calls. Way simpler than running a separate process.
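A sketch of what that Code node could look like (assuming n8n’s Code node environment, where `this.helpers.httpRequest` is available; the URL and header are the documented metadata-server token API):

```javascript
// Sketch of an n8n Code node body for fetching the token.
const TOKEN_URL =
  'http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token';

// Build the request options separately so the shape is easy to check.
function buildTokenRequest() {
  return {
    method: 'GET',
    url: TOKEN_URL,
    headers: { 'Metadata-Flavor': 'Google' }, // required, otherwise the server rejects the call
    json: true,
  };
}

// Inside the Code node you would then do something like:
// const { access_token, expires_in } = await this.helpers.httpRequest(buildTokenRequest());
// return [{ json: { access_token, expires_in } }];
```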

Both workarounds above get you past the “no JSON key in container” requirement, but there are two second-order issues worth sizing before you commit:

1. **Token refresh is not just “every 55 min.”** Metadata-server tokens are 1h by default, but if your Cloud Run instance is min-instances=0 and cold-starts, the sidecar/Code-node caches are useless on the first invocation after scale-to-zero. You’ll want a refresh-on-demand pattern where the 401 from upstream triggers a fresh metadata call, not a scheduled refresh. Without that, the first workflow run after a cold start silently fails auth.

2. **If you add AWS or Azure later, the abstraction breaks.** The Code-node pattern hard-codes `metadata.google.internal`. The sidecar pattern assumes GCP’s 169.254.169.254 shape. As soon as a second cloud enters the picture, you’re either maintaining two parallel auth paths in every workflow, or you centralize token exchange somewhere outside n8n.
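The refresh-on-demand pattern from point 1 can be sketched as a small wrapper. Here `fetchToken` and `callApi` are hypothetical stand-ins for the metadata-server call and the upstream (e.g. Vertex) request; in real use the cache would survive between invocations.

```javascript
// Refresh-on-demand: a 401 from upstream triggers one fresh metadata call,
// instead of relying on a scheduled refresh that a cold start can invalidate.
async function callWithAuth(callApi, fetchToken, cache = {}) {
  if (!cache.token) cache.token = await fetchToken();
  let res = await callApi(cache.token);
  if (res.status === 401) {
    // Stale cached token, e.g. first run after a cold start: refresh once and retry.
    cache.token = await fetchToken();
    res = await callApi(cache.token);
  }
  return res;
}
```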

The native ADC PR (#22942) solves problem 1 for Vertex specifically but not problem 2, and not for any non-Google node.

An alternative worth considering: pull credential exchange out of n8n entirely into a small proxy that sits between n8n and the cloud APIs. n8n just points HTTP Request at `proxy/vertex/…` with a static internal token; the proxy handles ADC, metadata refresh, cross-cloud abstraction. We’ve been building something open-source in that shape — happy to DM a pointer if your setup is heading multi-cloud.

@achamm Nice one! To get the token I’d need to make an HTTP request, right?
I searched through the available credential types but couldn’t find anything suitable that I could reuse across many workflows. What would be the best practice for tackling that? Create a reusable workflow?

@nwnwnw413 thanks! Cold starts won’t be an issue for us at all, and we don’t plan to use AWS/Azure AI capabilities at the moment.

But good points, worth keeping in mind.

@rgrzesk Yeah, exactly: an HTTP request to the metadata endpoint from a Code node. For reuse across workflows, put the token-fetch logic in a separate workflow and call it with the Execute Workflow node. That way every workflow calls one shared “get-token” workflow and gets the access_token back, with no duplicated code.
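A sketch of the Code node inside that shared “get-token” workflow, caching the token in workflow static data so repeated calls don’t hit the metadata server every time. Assumptions: the n8n Code node APIs `this.getWorkflowStaticData` and `this.helpers.httpRequest`; the 60-second safety margin is an arbitrary choice, not an n8n default.

```javascript
// Consider a cached token fresh only if it has at least 60 s of life left.
function tokenIsFresh(cached, nowMs) {
  return Boolean(cached && cached.expiresAtMs - 60000 > nowMs);
}

// In the Code node itself, something like:
// const store = this.getWorkflowStaticData('global');
// if (!tokenIsFresh(store.gcpToken, Date.now())) {
//   const { access_token, expires_in } = await this.helpers.httpRequest({
//     method: 'GET',
//     url: 'http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token',
//     headers: { 'Metadata-Flavor': 'Google' },
//     json: true,
//   });
//   store.gcpToken = { access_token, expiresAtMs: Date.now() + expires_in * 1000 };
// }
// return [{ json: { access_token: store.gcpToken.access_token } }];
```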