[Bug] Connection error in AI Agent node after upgrading from n8n v2.2.4 to v2.11.4 (Docker)

Describe the problem/error/question

After upgrading n8n from version 2.2.4 to 2.11.4 using Docker, my AI Agent node throws a connection error when using the OpenRouter Chat Model. The workflow was working perfectly fine before the upgrade.

What is the error message (if any)?

```
Connection error.
The expression evaluated to a falsy value: assert(webidl.is.ReadableStream(stream))
```

Please share your workflow

(Select the nodes on your canvas and use the keyboard shortcuts CMD+C/CTRL+C and CMD+V/CTRL+V to copy and paste the workflow.)

Share the output returned by the last node

```
[
  {
    "headers": {
      "connection": "upgrade",
      "host": "n8n.grow.salon",
      "content-length": "68",
      "content-type": "application/json",
      "user-agent": "PostmanRuntime/7.49.1",
      "accept": "*/*",
      "postman-token": "5cad0ea6-74f2-48d5-a1f2-043cad6ef9b1",
      "accept-encoding": "gzip, deflate, br"
    },
    "params": {},
    "query": {},
    "body": {
      "context": "Golang Slice VS Array",
      "number_hashtags": 5
    },
    "webhookUrl": "http://localhost:5678/webhook-test/grow-socials/linkedin/hashtags-generator",
    "executionMode": "test",
    "error": "Connection error."
  }
]
```

Information on your n8n setup

  • n8n version: 2.11.4
  • Database (default: SQLite): PostgreSQL
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker
  • Operating system: Ubuntu

That error usually points to a Node.js or undici stream mismatch after the upgrade, not to the prompt itself.

`assert(webidl.is.ReadableStream(stream))` shows up when the AI node expects a web ReadableStream, but the runtime inside the container hands back something incompatible. On Docker upgrades, the first things I would check are:

  1. Rebuild or pull the image cleanly and recreate the containers; do not reuse old layers or stale custom packages (a rough command sketch for checks 1-3 follows after this list).
  2. Make sure every n8n container runs the exact same version if you run main plus workers. Mixed 2.2.x and 2.11.x components can cause odd runtime behavior.
  3. If you mount anything under .n8n or custom nodes, temporarily test with a clean volume or a clean container to rule out leftover package state.
  4. In the AI Agent path, try turning off streaming if the OpenRouter chat model supports both streaming and non-streaming. A few provider wrappers break exactly at the stream layer after upgrades.
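
For checks 1-3, here is a rough shell sketch. It assumes Docker Compose, the official n8nio/n8n image, and a service named n8n; the service name, port mapping, and volume name are placeholders, so adjust them to your setup:

```
# Pull the exact tag and recreate the containers so no stale layers are reused
docker compose pull
docker compose up -d --force-recreate

# Confirm every container (main and workers) reports the same n8n version
docker compose exec n8n n8n --version

# Rule out leftover volume/package state with a throwaway container on a fresh volume
docker run --rm -p 5679:5678 -v n8n_clean_test:/home/node/.n8n n8nio/n8n:2.11.4
```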

Because this worked before 2.11.4, I would start by isolating whether it is stale container state, mixed versions across containers, or OpenRouter streaming compatibility with the newer AI Agent implementation.

If you post the exact Docker image tag plus whether you use workers or queue mode, that should narrow it down pretty fast.

Hey @fachri, welcome to the n8n community!

My first check would be to swap the OpenRouter Chat Model for one of the models explicitly documented for the AI Agent; if that works, the issue is likely in the OpenRouter/agent integration path rather than in your workflow itself. The ReadableStream assertion also makes this look like a streaming/client compatibility issue after the upgrade, not a normal prompt or credential error.

If you can, share the workflow JSON and test the same prompt with a supported chat model in the same agent, because that should confirm very quickly whether this is a real OpenRouter compatibility bug in 2.11.4.

The `assert(webidl.is.ReadableStream(stream))` error is a streaming compatibility issue introduced with the new task runner architecture in 2.11.x. OpenRouter uses streaming by default, and the runner's internal stream handling changed between versions.

Two things to try in your docker-compose:

1. Add `N8N_RUNNERS_ENABLED=true` explicitly; without it, the runner can land in a broken hybrid state on a fresh upgrade, where it tries to handle streaming but cannot initialize the ReadableStream properly.

2. If that doesn't fix it, add this env var to disable streaming for LangChain nodes:


```
N8N_LANGCHAIN_STREAMING=false
```

This forces n8n to wait for the full response instead of streaming, which sidesteps the ReadableStream issue entirely. There's a slight latency tradeoff, but it's much more stable.
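
For reference, a minimal docker-compose sketch with both variables in place; the service name and image tag here are just placeholders taken from this thread, so keep the rest of your file as it is:

```
services:
  n8n:
    image: n8nio/n8n:2.11.4
    environment:
      - N8N_RUNNERS_ENABLED=true
      # only add the next line if the runner flag alone doesn't help
      - N8N_LANGCHAIN_STREAMING=false
```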

If neither works, share your docker-compose and confirm whether you're using `n8nio/n8n:2.11.4` or a custom image; the base image matters here.