MCP Community node not working as Agent tool

I recently downloaded the new MCP community node.
I have used the node successfully (Brave MCP). I input all the appropriate credentials and everything, just like the example in the community node documentation shows, and it works great.

HOWEVER, whenever I try to use it as an agent tool, I see this error:

Problem running workflow

Unrecognized node type: n8n-nodes-mcp.mcpClientTool

Information on your n8n setup

  • n8n version: 1.82.3
  • Database (default: SQLite): default
  • n8n EXECUTIONS_PROCESS setting (default: own, main): default
  • Running n8n via (Docker, npm, n8n cloud, desktop app): npm
  • Operating system: Windows 11

It sounds like you might need to update your docker-compose file to allow community nodes as tools. Please see this video, around the 26-minute mark.
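
For reference, in a docker-compose setup the flag in question would typically be set like this (a sketch; the service name and image are assumptions, adjust to your stack):

```yaml
services:
  n8n:
    image: n8nio/n8n:latest
    environment:
      # Allow community nodes (like n8n-nodes-mcp) to be used as AI Agent tools
      - N8N_COMMUNITY_PACKAGES_ALLOW_TOOL_USAGE=true
```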

read: GitHub - nerding-io/n8n-nodes-mcp: n8n custom node for MCP

Super appreciate that advice, but I’ve done this.
I have the environment variable in my system and in my n8n files.
I’ve confirmed it comes back as true through my terminal as well.

@Anthony_Lee Did you redeploy your n8n instance?

Yes. I even uninstalled and reinstalled.

Hi Anthony_Lee, I am getting the same problem and have done everything you did too. This happens on both localhost and hosted server for me. Just wondering if you have found a solution. Thanks in advance

Please make sure that you are on the latest version, and that the N8N_COMMUNITY_PACKAGES_ALLOW_TOOL_USAGE env variable is actually set to true.

Please also try to add the MCP node as a regular node. If the package is installed correctly, even without the env variable, you should still be able to see the regular node. If you don’t see the node, then the package isn’t installed correctly.

I have done all of this. The env variable is true, the latest version of n8n is installed, and the regular node WORKS. Just the tool does not.

I’ve tried a few things, and have not been able to reproduce this so far. Looking at the code, the only possibility seems to be that the env variable isn’t actually reaching the code that checks for it.

Can you try running this workflow to check if the “Execute Command” node can read the env variable or not:

The output should look something like this
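
For anyone who can’t see the attached workflow: the check boils down to reading the variable from a shell inside the environment where n8n runs. A minimal sketch, assuming a POSIX shell:

```shell
# Print the flag n8n checks before exposing community nodes as tools.
# "true" means the variable is visible to this environment;
# an empty line means it is not being passed through.
echo "$N8N_COMMUNITY_PACKAGES_ALLOW_TOOL_USAGE"
```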

Thanks, this helps me :slight_smile:

I appear to have the same issue. The output of this command is 0, empty, empty.

SOLVED: it works for me now. Here are the steps I took.

How We Fixed the n8n and MCP Node Issues

Problem Summary

  • n8n was installed but broken due to conflicts between the MCP node and other dependencies
  • The error “Unrecognized node type: n8n-nodes-mcp.mcpClientTool” persisted despite node visibility in UI
  • Conflicting Node.js versions (v23.10.0 via Homebrew vs v18.17.0 via NVM) caused compatibility issues

Solution Steps

  1. Completely removed n8n and related files (note: this deletes all workflows):

    npm uninstall -g n8n
    rm -rf /opt/homebrew/lib/node_modules/n8n
    rm -rf /opt/homebrew/bin/n8n
    rm -rf ~/.n8n
    # Removed all cached and related n8n directories
    npm cache clean --force
    
  2. Switched to a compatible Node.js version:

    nvm use v18.17.0
    
  3. Reinstalled n8n with correct Node.js version:

    npm install -g n8n
    
  4. Set required environment variable for community packages:

    echo 'export N8N_COMMUNITY_PACKAGES_ALLOW_TOOL_USAGE=true' >> ~/.zshrc
    source ~/.zshrc
    mkdir -p ~/.n8n
    echo 'N8N_COMMUNITY_PACKAGES_ALLOW_TOOL_USAGE=true' > ~/.n8n/.env
    
  5. Ensured correct Node.js version is used to run n8n:

    ~/.nvm/versions/node/v18.17.0/bin/node ~/.nvm/versions/node/v18.17.0/bin/n8n start
    
  6. Made the configuration persistent:

    echo 'export PATH="$HOME/.nvm/versions/node/v18.17.0/bin:$PATH"' >> ~/.zshrc
    source ~/.zshrc
    

The key issues were ensuring n8n ran on a compatible Node.js version (v18.17.0) and enabling the community packages tool usage flag. This combination allowed the MCP nodes to function correctly.
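
The persistence part of the steps above can be sanity-checked by grepping the two files that were written (paths as in step 4):

```shell
# Each file should contain one matching line; no output means the
# variable was not persisted and n8n will start without it.
grep N8N_COMMUNITY_PACKAGES_ALLOW_TOOL_USAGE ~/.zshrc ~/.n8n/.env
```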

For anyone who gets 0, empty, empty instead of 0, empty, true: your configuration isn’t passing in the env variable for some reason.

And n8n can’t really fix this for you; the configuration outside n8n itself needs to be fixed.
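
For Docker setups, one way to confirm the variable actually reaches the container is to query the container’s environment directly (a sketch; the container name `n8n` is an assumption, adjust to your stack):

```shell
# Prints the value if the variable is set inside the container;
# printenv prints nothing and exits non-zero if it is missing.
docker exec n8n printenv N8N_COMMUNITY_PACKAGES_ALLOW_TOOL_USAGE
```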

I have the exact same problem as @Anthony_Lee.

Thanks to @netroy I was able to determine that my configuration isn’t passing in the env variable because I’m getting 0, empty, empty.

I tried @ade_RiverIsland 's suggestion with the Node.js versions, but that did not fix the problem.

Anyone else have solutions that worked for them? I have the latest version of n8n on my machine in a Docker container, and the docker-compose file has the correct env variable, etc. Any ideas as to what configurations outside of n8n may be causing this?
Thanks in advance for any ideas! :smiley:

Can you please share the docker-compose file (with any secrets redacted)?
Or maybe try asking one of the LLMs whether they see any obvious issues in your stack.

I have the same issue.
I installed n8n on top of Hetzner + Coolify, and I’m not tech-savvy, so I don’t distinguish between Node.js and Docker… yet. Meaning, I need more detailed instructions, without coding-skill assumptions.

My env variable added:

Here is my docker-compose file. Thanks for any help you can give! I will also ask an LLM. :blush:

volumes:
  n8n_storage:
  postgres_storage:
  ollama_storage:
  qdrant_storage:

networks:
  demo:

x-n8n: &service-n8n
  image: n8nio/n8n:latest
  networks: ['demo']
  environment:
    - DB_TYPE=postgresdb
    - DB_POSTGRESDB_HOST=postgres
    - DB_POSTGRESDB_USER=${POSTGRES_USER}
    - DB_POSTGRESDB_PASSWORD=${POSTGRES_PASSWORD}
    - N8N_DIAGNOSTICS_ENABLED=false
    - N8N_PERSONALIZATION_ENABLED=false
    - N8N_ENCRYPTION_KEY
    - N8N_USER_MANAGEMENT_JWT_SECRET
    - OLLAMA_HOST=host.docker.internal:11434
    - N8N_RUNNERS_ENABLED=true
    - N8N_COMMUNITY_PACKAGES_ALLOW_TOOL_USAGE=true

x-ollama: &service-ollama
  image: ollama/ollama:latest
  container_name: ollama
  networks: ['demo']
  restart: unless-stopped
  ports:
    - 11434:11434
  volumes:
    - ollama_storage:/root/.ollama

x-init-ollama: &init-ollama
  image: ollama/ollama:latest
  networks: ['demo']
  container_name: ollama-pull-llama
  volumes:
    - ollama_storage:/root/.ollama
  entrypoint: /bin/sh
  environment:
    - OLLAMA_HOST=ollama:11434
  command:
    - "-c"
    - "sleep 3; ollama pull llama3.2"

services:
  postgres:
    image: postgres:16-alpine
    hostname: postgres
    networks: ['demo']
    restart: unless-stopped
    environment:
      - POSTGRES_USER
      - POSTGRES_PASSWORD
      - POSTGRES_DB
    volumes:
      - postgres_storage:/var/lib/postgresql/data
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -h localhost -U ${POSTGRES_USER} -d ${POSTGRES_DB}']
      interval: 5s
      timeout: 5s
      retries: 10

  n8n-import:
    <<: *service-n8n
    hostname: n8n-import
    container_name: n8n-import
    entrypoint: /bin/sh
    command:
      - "-c"
      - "n8n import:credentials --separate --input=/backup/credentials && n8n import:workflow --separate --input=/backup/workflows"
    volumes:
      - ./n8n/backup:/backup
    depends_on:
      postgres:
        condition: service_healthy

  n8n:
    <<: *service-n8n
    hostname: n8n
    container_name: n8n
    restart: unless-stopped
    ports:
      - 5678:5678
    environment:
      - WEBHOOK_URL=https://your-domain.com/         # Replace with your tunnel domain
      - N8N_HOST=your-domain.com                     # The hostname Cloudflare will use
      - N8N_PROTOCOL=https                         # Ensures secure webhooks
    volumes:
      - n8n_storage:/home/node/.n8n
      - ./n8n/backup:/backup
      - ./shared:/data/shared
    depends_on:
      postgres:
        condition: service_healthy
      n8n-import:
        condition: service_completed_successfully

  qdrant:
    image: qdrant/qdrant
    hostname: qdrant
    container_name: qdrant
    networks: ['demo']
    restart: unless-stopped
    ports:
      - 6333:6333
    volumes:
      - qdrant_storage:/qdrant/storage

  ollama-cpu:
    profiles: ["cpu"]
    <<: *service-ollama

  ollama-gpu:
    profiles: ["gpu-nvidia"]
    <<: *service-ollama
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]

  ollama-gpu-amd:
    profiles: ["gpu-amd"]
    <<: *service-ollama
    image: ollama/ollama:rocm
    devices:
      - "/dev/kfd"
      - "/dev/dri"

  ollama-pull-llama-cpu:
    profiles: ["cpu"]
    <<: *init-ollama
    depends_on:
      - ollama-cpu

  ollama-pull-llama-gpu:
    profiles: ["gpu-nvidia"]
    <<: *init-ollama
    depends_on:
      - ollama-gpu

  ollama-pull-llama-gpu-amd:
    profiles: [gpu-amd]
    <<: *init-ollama
    image: ollama/ollama:rocm
    depends_on:
     - ollama-gpu-amd

  cloudflared:
    container_name: cloudflared
    image: cloudflare/cloudflared:latest
    restart: unless-stopped
    user: nonroot
    entrypoint:
      - cloudflared
      - --no-autoupdate
    command:
      - tunnel
      - --no-autoupdate
      - run
      - --token
      - ${CLOUDFLARE_TUNNEL_TOKEN}    # Replace with your Cloudflare tunnel token
    networks:
      - demo

@yo-yo.eco This bit is the issue. When you add `environment:` to the `n8n` service, it overwrites the entire `environment:` list from the `x-n8n: &service-n8n` block above, so `N8N_COMMUNITY_PACKAGES_ALLOW_TOOL_USAGE` never reaches the container.
Either move these variables to the shared block, or drop the YAML anchors and duplicate the config.

You could also try anchoring the shared environment separately. Note that YAML merge keys (`<<:`) only work on mappings, so the shared `environment:` would have to be rewritten in map form, something like:

    x-n8n-env: &n8n-env
      DB_TYPE: postgresdb
      # ...the rest of the shared variables...

    services:
      n8n:
        environment:
          <<: *n8n-env
          WEBHOOK_URL: https://your-domain.com/   # Replace with your tunnel domain
          N8N_HOST: your-domain.com               # The hostname Cloudflare will use
          N8N_PROTOCOL: https                     # Ensures secure webhooks

I haven’t tested it, so it might not work.

@davdelven That looks like Coolify isn’t passing the env variables for you. Can you try adding the env variable to the `environment:` block in the third screenshot?
