OpenAI Node & Merge Node don't have an input connection, only an output

Describe the problem/error/question

When running locally via Docker, everything works: nodes connect automatically and the inputs appear. However, when hosted on GCP Cloud Run, the inputs don't appear at all. The instructions at https://docs.n8n.io/hosting/installation/server-setups/google-cloud-run/ were followed during setup.
We tried both the free and business licenses, with the same problem. A similar issue has been raised in the past, but with no clear instructions to resolve it: Unable to link nodes as input to certain nodes (openai / llm related) · Issue #15944 · n8n-io/n8n · GitHub

What is the error message (if any)?

We don't have any error message in the logs.

Please share your workflow

(Select the nodes on your canvas and use the keyboard shortcuts CMD+C/CTRL+C and CMD+V/CTRL+V to copy and paste the workflow.)

Share the output returned by the last node

Information on your n8n setup

  • n8n version: 2.2.4
  • Database (default: SQLite): PostgreSQL
  • n8n EXECUTIONS_PROCESS setting (default: own, main): default
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker via GCP (cloud run)
  • Operating system: Mac

Hi @IntriguedbyN8N,

To help diagnose this issue, could you please provide more details:

  1. When you say the inputs don’t appear, do you mean the input connectors are completely missing from the nodes, or they exist but data doesn’t flow through them?

  2. Could you share a screenshot of how the workflow looks in GCP CloudRun vs how it looks locally?

  3. What environment variables have you configured in CloudRun? Specifically, do you have N8N_HOST, WEBHOOK_URL, or N8N_EDITOR_BASE_URL set?

  4. Are you accessing the n8n editor through the same URL that’s configured in those environment variables?

  5. Do you see any errors in the browser console (F12 → Console tab) when loading the workflow?

This will help us understand if it’s a configuration issue, a frontend rendering problem, or something else.

Hi @IntriguedbyN8N,

This is expected behavior when the n8n editor runs behind a proxy with a restrictive Content-Security-Policy (CSP).

The fix is to adjust or remove the CSP header in the Cloud Run environment or any upstream layer (such as a load balancer or reverse proxy).
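As a first diagnostic step, one way to adjust the CSP that n8n itself serves is the `N8N_CONTENT_SECURITY_POLICY` environment variable (a sketch only — the service name and region below are placeholders, and you should confirm this variable against the docs for your n8n version):

```shell
# Sketch: relax the CSP served by n8n on Cloud Run.
# "my-n8n" and "europe-west1" are placeholders for your own service/region.
# An empty JSON object effectively disables n8n's CSP header, which is
# useful as a first test to confirm the CSP is what blocks the inputs.
gcloud run services update my-n8n \
  --region europe-west1 \
  --update-env-vars 'N8N_CONTENT_SECURITY_POLICY={}'
```

If the header is instead injected by a load balancer or reverse proxy in front of Cloud Run, it must be adjusted there rather than in n8n.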

Reference:

The issue has now been resolved. We expanded the CSP by adding:

"imgSrc": ["'self'", "data:", "blob:"],
"scriptSrc": ["'self'", "*", "'unsafe-eval'"],
"scriptSrcAttr": ["'self'", "'unsafe-inline'"],
"workerSrc": ["'self'", "blob:"]

However, those entries are not really safe in my opinion. Do you have any specific recommendations on what to keep in the CSP when hosting n8n? I would like to follow some guidelines without opening things up too much.

Hi @rgrzesk,

Hope you’re doing well.

What can be safely recommended is the following:

For the n8n domain, avoid using a generic and overly restrictive CSP inherited from other applications. If you already enforce a strong global CSP, create a specific exception or rule for the n8n host.

First, test the setup without any Content-Security-Policy to confirm that the issue disappears.
Then, if you want to reintroduce a CSP, do it incrementally, testing the editor after each change.
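When iterating like this, it helps to confirm which CSP header the editor actually serves after each change. A small helper for that (the hostname in the usage comment is a placeholder):

```shell
# show_csp filters an HTTP response's headers down to the
# Content-Security-Policy line. Pipe `curl -sI <editor URL>` into it, e.g.
#   curl -sI https://n8n.example.com/ | show_csp
# (n8n.example.com is a placeholder for your real n8n host).
show_csp() {
  grep -i '^content-security-policy'
}
```

If the command prints nothing, no CSP header is being sent for that URL, and the restriction must be coming from another layer (or from the browser cache).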

In cases reported by other users, working configurations typically used a very permissive CSP (for example frame-ancestors * or even a complete removal of the CSP) for n8n. There is currently no documented “minimum safe” CSP profile in the available sources.

Since there is no official documentation defining a “secure and supported” CSP for n8n, any stricter policy you choose to keep must be validated through testing in your own environment (loading the editor, opening AI nodes, connecting nodes, running workflows, etc.).
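As one untested starting point for that incremental tightening, you could take the wildcard-based configuration that worked and narrow it a step at a time, for example by dropping `"*"` from `scriptSrc` first. The directive values below are assumptions to validate in your own environment, not an officially supported profile:

```shell
# Sketch only: a tighter candidate CSP for n8n, to be validated by testing
# the editor after each change (opening AI nodes, connecting nodes,
# running workflows). Compared to the wildcard config that worked, this
# drops "*" from scriptSrc; if the editor breaks, relax one directive at
# a time until it works again.
N8N_CONTENT_SECURITY_POLICY=$(cat <<'EOF'
{
  "imgSrc": ["'self'", "data:", "blob:"],
  "scriptSrc": ["'self'", "'unsafe-eval'"],
  "scriptSrcAttr": ["'self'", "'unsafe-inline'"],
  "workerSrc": ["'self'", "blob:"]
}
EOF
)
export N8N_CONTENT_SECURITY_POLICY
```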