AI Agent Node Missing Inputs on Google Cloud VM (Works on Repocloud)

Issue Summary

My AI Agent node is not displaying the expected tool, chat, and memory inputs, and the node renders differently from how it should, on self-hosted n8n instances running on Google Cloud VMs. The same workflow works perfectly on Repocloud.


Environment Details

:white_check_mark: Working Setup (Repocloud)

  • Platform: Repocloud hosted n8n

  • Agent Node: Shows all expected inputs (tool, chat, memory)

  • Status: Works perfectly :white_check_mark:

:cross_mark: Failing Setup (Google Cloud VM)

  • Platform: Google Cloud VM (Ubuntu 22.04 LTS)

  • VM Size: e2-medium (2 vCPU, 4GB RAM)

  • Docker: Latest version

  • n8n: Latest Docker image (n8nio/n8n:latest)

  • Agent Node: Missing tool/chat/memory inputs :cross_mark:

  • Other Nodes: All work perfectly (HTTP, triggers, etc.)

Docker Configuration


sudo docker run -d --restart unless-stopped -it \
  --name n8n \
  -p 5678:5678 \
  -v n8n_data:/home/node/.n8n \
  docker.n8n.io/n8nio/n8n
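
Since the latest tag resolves to whatever was current at pull time, it helps to record the exact version the container is actually running when comparing against Repocloud. Assuming the container is named n8n as in the command above, a quick check might look like this:

# Print the exact n8n version running inside the container
sudo docker exec n8n n8n --version

# Show which image digest the "latest" tag resolved to at pull time
sudo docker image inspect --format '{{index .RepoDigests 0}}' docker.n8n.io/n8nio/n8n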

What I’ve Tried

  1. Created multiple fresh VMs from scratch - same issue

  2. Different Google Cloud regions - same issue

  3. Latest n8n Docker image - same issue

  4. Fresh installations with no data migration - same issue

  5. Standard nginx reverse proxy setup - working for everything else

  6. Verified networking - all other n8n functionality works

Specific Symptoms

  • Agent node shows red warning triangle

  • Missing inputs: No tool/chat/memory connection points visible

  • OpenAI Model node: Also shows red warning triangle

  • All other nodes: Work normally (webhooks, HTTP requests, triggers, etc.)

  • n8n interface: Loads and functions normally otherwise

Questions

  1. Environment Variables: Are there specific environment variables required for AI Agent functionality that Repocloud sets automatically?

  2. Memory Requirements: Does the Agent node require more memory than e2-medium provides? Repocloud might allocate more resources.

  3. Google Cloud Restrictions: Could Google Cloud be blocking specific AI API endpoints that the Agent node requires?

  4. Docker Configuration: Is there a specific Docker setup needed for AI features that’s different from standard n8n deployment?

Version Information

  • n8n Version: Latest (as of pulling n8nio/n8n:latest)

  • Docker Version: Latest

  • OS: Ubuntu 22.04 LTS on Google Cloud

  • Browser: Chrome (same browser works with Repocloud)

Additional Context

This is a very specific issue where only the AI Agent node fails while everything else works perfectly. The exact same workflow template works on Repocloud but fails on self-hosted Google Cloud instances.

I’ve been debugging this for an entire day and am stuck. Any insights into what might be different between Repocloud’s n8n setup and a standard Docker deployment would be greatly appreciated.

Has anyone successfully deployed AI Agent nodes on Google Cloud VMs? What configuration did you use?

Hi, it should work without any configuration.

Could you try pasting this into your workflow?

Yeah, it used to work without any configuration. I tried copy-pasting it, but it pastes as the same weird agent component. I tried importing from files and from URLs; same issue. It's very strange; tbh it doesn't make any sense that everything else works except for this.

I see, what guide did you use to install on GCloud? This one is the best (works with any provider).

Yeah, I know about DigitalOcean and other options. I have it running on Repocloud. However, I'd like to understand what's going on and fix it. I'm using a standard Docker container (following the Docker | n8n Docs guide) on a Debian VM instance on Google Cloud, NOT the GKE (Kubernetes) tutorial on the n8n docs site.

AI Agent Node Missing Inputs - Same Issue Here

I’m experiencing the exact same problem on a self-hosted n8n instance. Your description matches my situation perfectly!

My Environment Details

Platform: Self-hosted VPS (Ubuntu 22.04 LTS)
VM Size: 8 vCPU, 32GB RAM (so memory isn’t the issue)
Docker: Latest version with Docker Compose
n8n: Version 1.94.1 (n8nio/n8n:latest)
Database: PostgreSQL 15 (not SQLite)

Identical Symptoms

  • :white_check_mark: AI Agent node exists in node selector
  • :white_check_mark: n8n interface fully functional otherwise

Key Discovery: Timeline Issue

Most importantly: The AI Agent node worked initially with all inputs (tool/chat/memory) on the same n8n version, but the inputs disappeared after configuration changes during development. This suggests it’s not a version issue but a configuration/environment variable problem.

What I’ve Tried

  1. Multiple Docker image pulls - same issue persists
  2. Browser cache clearing and incognito mode - no change
  3. Complete “clean reset” to minimal configuration - inputs still missing
  4. Database analysis - found empty settings table (suspicious)
  5. Version verification - confirmed n8n 1.94.1 supports AI Agent (requires 1.19.4+)

Environment Variables Theory

Based on the Repocloud vs self-hosted difference, I suspect Repocloud automatically sets environment variables that enable AI Agent inputs, such as:

N8N_COMMUNITY_PACKAGES_ENABLED: "true"
N8N_LANGCHAIN_ENABLED: "true"  # Unverified
N8N_AI_ENABLED: "true"  # Unverified

However, I couldn’t find these in official n8n documentation, so they might be internal feature flags.
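
For anyone who wants to test this theory on a plain docker run deployment, here is a minimal sketch; to be clear, N8N_LANGCHAIN_ENABLED and N8N_AI_ENABLED are unverified guesses from the list above, not settings I have found in the n8n docs:

# Sketch only: the last two -e flags are unverified guesses, not documented n8n settings
sudo docker run -d --restart unless-stopped -it \
  --name n8n \
  -p 5678:5678 \
  -e N8N_COMMUNITY_PACKAGES_ENABLED=true \
  -e N8N_LANGCHAIN_ENABLED=true \
  -e N8N_AI_ENABLED=true \
  -v n8n_data:/home/node/.n8n \
  docker.n8n.io/n8nio/n8n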

Questions for Community

  1. Has anyone successfully restored AI Agent inputs on self-hosted after they disappeared?
  2. What’s in your database settings table if AI Agent works properly?
  3. Are there undocumented environment variables that Repocloud might be setting?
  4. Could this be related to community packages registration in the database?

This seems like a systematic issue affecting multiple self-hosted deployments. I’m happy to test any suggested solutions and report back results!


I just did a complete fresh installation of n8n to eliminate any configuration conflicts, but unfortunately the AI Agent inputs issue persists.

What I Did

  • Complete data wipe: removed all Docker volumes (docker volume rm n8n-setup_n8n_data n8n-setup_postgres_data); the full reset flow is sketched after this list
  • Fresh docker-compose.yml: Clean, minimal configuration with only essential variables
  • Latest n8n version: Upgraded from 1.94.1 to 1.95.3 (latest available)
  • Clean database: Fresh PostgreSQL with all migrations completed successfully
  • Preserved networking: Ollama AI integration working perfectly
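
For reference, the reset flow I ran looks roughly like the sketch below; it assumes a Docker Compose project directory named n8n-setup, matching the volume names above:

# Run from the compose project directory (assumed here to be n8n-setup)
docker compose down                                            # stop and remove the containers
docker volume rm n8n-setup_n8n_data n8n-setup_postgres_data   # wipe all data
docker compose pull                                            # fetch the latest images
docker compose up -d                                           # recreate everything from scratch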

Current Status

  • :white_check_mark: n8n 1.95.3: Latest version running perfectly
  • :white_check_mark: All core functionality: Workflows, database, HTTPS, authentication working
  • :white_check_mark: AI integration: Ollama qwen3:30b-a3b model accessible via Docker networking
  • :cross_mark: AI Agent inputs: Still missing tool/chat/memory connection points (same issue)

Key Discovery

Fresh installation with latest n8n 1.95.3 eliminates these potential causes:

  • :cross_mark: Not a configuration issue (clean install)
  • :cross_mark: Not data corruption (fresh database)
  • :cross_mark: Not version compatibility (latest available)

I'm glad to know I'm not the only one having this issue; this confirms it isn't specific to my setup and deployment process. And yeah, I've also tried starting from scratch and the issue persists. I agree on this:

  • :cross_mark: Not a configuration issue (clean install)
  • :cross_mark: Not data corruption (fresh database)
  • :cross_mark: Not version compatibility (latest available)

It's a very strange bug that could be worth some research. What other options are there? Is it a bug in the latest n8n version's code? I might try deploying an older version to check whether that produces different results.

It's strange. Can you try downgrading to version 1.93.0?
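
For a plain docker run setup like the one earlier in the thread, pinning the older tag would look roughly like this (a sketch; note that downgrading on top of an existing volume may not be safe once database migrations have run, so testing on a fresh volume is safer):

# Stop and remove the current container (the n8n_data volume is kept)
sudo docker stop n8n && sudo docker rm n8n

# Recreate it pinned to 1.93.0 instead of :latest
sudo docker run -d --restart unless-stopped -it \
  --name n8n \
  -p 5678:5678 \
  -v n8n_data:/home/node/.n8n \
  docker.n8n.io/n8nio/n8n:1.93.0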

Also @Lukasz_Schab how are you deploying n8n?


I’m having this exact same issue.

I started on an Ubuntu ARM64 server with Hetzner as my VPS. I then rebuilt the server on Debian x86_64, and the same issue still persists.

My Setup:

  • n8n Version: 1.95.3 (resolves from n8nio/n8n:latest)
  • Host: Hetzner Cloud VM
  • OS: Debian (x86_64)
  • Deployment: Docker Compose (build: . from n8nio/n8n:latest)
  • Proxy: Nginx (working HTTPS)
  • Database: PostgreSQL (confirmed connected)

I’ve gone through a huge amount of troubleshooting, including:

  • Complete OS reinstallation (from Ubuntu ARM64 to Debian x86_64).
  • Forcing local Docker image builds (build: . from n8nio/n8n:latest).
  • Hardcoding all database connection details (N8N_DB_CONNECTION_URL, extra_hosts) to bypass IPv6 and DNS issues.
  • Disabling IPv6 for the Docker network.
  • Ensuring all Nginx configs, environment variables (N8N_COMMUNITY_PACKAGES_ALLOW_TOOL_USAGE=true, N8N_DIAGNOSTICS_ENABLED=true, N8N_PROXY_HOPS=1, WEBHOOK_URL, N8N_EDITOR_BASE_URL), and Docker setups are correct.

I downgraded and this did not help either :frowning:

:bullseye: SOLUTION FOUND: Content Security Policy Issue?

After a LOT of testing xD I think the missing AI Agent inputs are caused by a Content-Security-Policy header blocking JavaScript features the AI Agent needs. Try temporarily commenting out the Content-Security-Policy line in your reverse proxy config and test: if the inputs appear, that's our culprit.

After that, replace it with an AI-friendly CSP that lets the inputs render without breaking the rest of your security headers xD
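
A quick way to confirm whether a CSP header is being served at all, assuming an nginx reverse proxy as in the setups above (n8n.example.com is a placeholder for your own domain):

# Check whether the reverse proxy sends a Content-Security-Policy header
curl -sI https://n8n.example.com/ | grep -i content-security-policy

# If it does, comment out the matching add_header line in the nginx site
# config for a test, e.g.:
#   # add_header Content-Security-Policy "...";
# then validate the config and reload:
sudo nginx -t && sudo systemctl reload nginx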

Btw, has anyone found other security settings that break n8n features in unexpected ways? What should we look for?
