Connect to an AWS cluster from n8n using MCP?

Hi,
I have n8n deployed in a Kubernetes cluster on AWS (EKS), running in queue mode.
I managed to create a workflow that connects to an AWS account through an STS assume-role call and then to an EKS cluster. The workflow returns the temporary token and the kubeconfig file as environment variables, which I pass on to another workflow that runs kubectl commands on my worker. This way I can run parallel executions that each connect to a different account/cluster combination without interfering with each other.
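For context, here is a minimal sketch of the kind of thing that assume-role step could be doing, condensed into Python. The function name, session name, and the idea of passing `role_arn`/`cluster_name`/`region` in as workflow parameters are my assumptions; it relies on boto3 plus the `aws eks get-token` CLI, and writes a throwaway kubeconfig (as JSON, which kubectl accepts) that a later step can point `KUBECONFIG` at:

```python
# Sketch only: generate a per-execution kubeconfig after assuming a role.
# role_arn, cluster_name, region and the session name are placeholders that
# an n8n workflow would supply.
import json
import os
import subprocess
import tempfile

import boto3


def temp_kubeconfig(role_arn: str, cluster_name: str, region: str) -> str:
    """Assume the target role, fetch a short-lived EKS token, write a temp kubeconfig."""
    sts = boto3.client("sts", region_name=region)
    creds = sts.assume_role(RoleArn=role_arn, RoleSessionName="n8n-exec")["Credentials"]

    # Temporary credentials for the aws CLI call below.
    env = {
        **os.environ,
        "AWS_ACCESS_KEY_ID": creds["AccessKeyId"],
        "AWS_SECRET_ACCESS_KEY": creds["SecretAccessKey"],
        "AWS_SESSION_TOKEN": creds["SessionToken"],
        "AWS_DEFAULT_REGION": region,
    }

    # Short-lived bearer token for the API server (valid for ~15 minutes).
    token = json.loads(
        subprocess.check_output(
            ["aws", "eks", "get-token", "--cluster-name", cluster_name], env=env
        )
    )["status"]["token"]

    eks = boto3.client(
        "eks",
        region_name=region,
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    cluster = eks.describe_cluster(name=cluster_name)["cluster"]

    kubeconfig = {
        "apiVersion": "v1",
        "kind": "Config",
        "clusters": [{"name": cluster_name, "cluster": {
            "server": cluster["endpoint"],
            "certificate-authority-data": cluster["certificateAuthority"]["data"],
        }}],
        "users": [{"name": "n8n", "user": {"token": token}}],
        "contexts": [{"name": "n8n", "context": {"cluster": cluster_name, "user": "n8n"}}],
        "current-context": "n8n",
    }

    # kubectl parses kubeconfig as YAML, so JSON output is accepted as-is.
    fd, path = tempfile.mkstemp(suffix=".kubeconfig")
    with os.fdopen(fd, "w") as f:
        json.dump(kubeconfig, f)
    return path
```

Each parallel execution gets its own file path, which is why the executions don't step on each other.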

I tried replacing this whole mechanism with a Kubernetes MCP server (I tried several) together with the MCP Client community node (latest version). But because the node loads the MCP server (stdio) from its credentials, I cannot dynamically set the path to a temporary kubeconfig file.

I thought about another approach: starting a temporary MCP server over HTTP at the beginning of the workflow, pointed at a temporary kubeconfig file.
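Something along these lines is what I have in mind; note the server binary name and its `--transport`/`--port` flags are hypothetical here, you would substitute the actual CLI of whichever Kubernetes MCP server you use:

```python
# Sketch of the "temporary MCP server per execution" idea.
# "kubernetes-mcp-server" and its flags are placeholders, not a real CLI.
import os
import subprocess


def start_mcp_server(kubeconfig_path: str, port: int) -> subprocess.Popen:
    """Launch an MCP server scoped to one execution's temporary kubeconfig."""
    env = {**os.environ, "KUBECONFIG": kubeconfig_path}
    return subprocess.Popen(
        ["kubernetes-mcp-server", "--transport", "http", "--port", str(port)],
        env=env,
    )
    # The workflow would then point the MCP client at http://localhost:<port>
    # and terminate the process (and delete the kubeconfig) when it finishes.
```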

My questions are:

  1. When invoking sub-workflows from a workflow, do all the executions run on the same worker, or can they be pushed to a different one?
  2. Is there a way to scale up the workers, or 'spin up' one on demand per workflow (workers are basically pods in my setup)? A sketch of scaling the worker Deployment from code is below.
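On question 2: since the workers are just a Deployment, they can be scaled from outside n8n. A common pattern is an HPA or KEDA scaler keyed on the Redis queue that queue mode uses, but it can also be done imperatively from a workflow step. A small sketch with the official Kubernetes Python client, where the Deployment name `n8n-worker`, the namespace `n8n`, and the replica count are placeholders for whatever your setup actually uses:

```python
# Sketch: imperatively scale the n8n worker Deployment.
# Names and namespace are assumptions about the setup, adjust to yours.
from kubernetes import client, config


def scale_workers(replicas: int) -> None:
    """Patch the worker Deployment's replica count."""
    config.load_incluster_config()  # use load_kube_config() when running outside the cluster
    apps = client.AppsV1Api()
    apps.patch_namespaced_deployment_scale(
        name="n8n-worker",   # hypothetical Deployment name
        namespace="n8n",     # hypothetical namespace
        body={"spec": {"replicas": replicas}},
    )


scale_workers(5)
```

Whether spinning up a dedicated worker *per workflow* buys anything depends on how n8n routes the queued executions, which is really question 1.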

Information on your n8n setup

  • n8n version: 1.92.2
  • Database (default: SQLite): Postgres
  • n8n EXECUTIONS_PROCESS setting (default: own, main): queue mode
  • Running n8n via (Docker, npm, n8n cloud, desktop app): EKS cluster
  • Operating system:

Hi Itamar,
I have the same issue and need to understand how the community handles this problem.