Describe the problem/error/question
I’m using the AI Agent node in n8n with an AWS Bedrock LLM (under the hood it uses LangChain’s ChatBedrockConverse). When the agent attempts to generate a response using streaming, it fails with an HTTP/2 error:
What is the error message (if any)?
http2 request did not get response
Please share your workflow
Share the output returned by the last node
The same HTTP/2 error as above: the request fails and no response is returned.
Debugging:
After some debugging and writing my own isolated JavaScript implementation, I discovered that AWS Bedrock’s streaming support (via ChatBedrockConverse) requires a properly configured NodeHttpHandler with keep-alive HTTP/HTTPS agents to handle HTTP/2 connections correctly.
// simple isolated JS file — working code
import { BedrockRuntimeClient } from "@aws-sdk/client-bedrock-runtime";
import { NodeHttpHandler } from "@smithy/node-http-handler";
import { ChatBedrockConverse } from "@langchain/aws";
import { Agent as HttpAgent } from "node:http";
import { Agent as HttpsAgent } from "node:https";

// `credentials` holds the AWS keys (loaded elsewhere)
const client = new BedrockRuntimeClient({
  region: credentials.region,
  credentials: {
    accessKeyId: credentials.accessKeyId,
    secretAccessKey: credentials.secretAccessKey,
    sessionToken: credentials.sessionToken,
  },
  // NodeHttpHandler expects real Agent instances,
  // not plain { keepAlive: true } option objects
  requestHandler: new NodeHttpHandler({
    httpAgent: new HttpAgent({ keepAlive: true }),
    httpsAgent: new HttpsAgent({ keepAlive: true }),
  }),
  maxAttempts: 3,
});

const llm = new ChatBedrockConverse({
  model: "mistral.mistral-large-2402-v1:0",
  client,
  verbose: false,
  temperature: 0.2,
});
Upon checking the n8n source code, the current LmChatAwsBedrock.node.ts implementation does not expose any way to set HTTP agents with keepAlive: true on the underlying client, which appears necessary for proper HTTP/2 streaming.
code path: packages/@n8n/nodes-langchain/nodes/llms/LmChatAwsBedrock/LmChatAwsBedrock.node.ts
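A minimal fix in that node would boil down to constructing keep-alive agents and handing them to the SDK. The sketch below only runs the stdlib agent setup; the AWS/Smithy/LangChain wiring is shown in comments, since those names mirror my isolated snippet above, not the actual n8n source:

```javascript
// Keep-alive agents built from Node's stdlib -- NodeHttpHandler expects
// real http.Agent/https.Agent instances, not plain { keepAlive: true } objects.
import { Agent as HttpAgent } from "node:http";
import { Agent as HttpsAgent } from "node:https";

const httpAgent = new HttpAgent({ keepAlive: true });
const httpsAgent = new HttpsAgent({ keepAlive: true });

// Inside the node, these would then be wired up roughly as:
//   const requestHandler = new NodeHttpHandler({ httpAgent, httpsAgent });
//   const client = new BedrockRuntimeClient({ region, credentials, requestHandler });
//   const model = new ChatBedrockConverse({ model: modelName, client });

console.log(httpAgent.keepAlive, httpsAgent.keepAlive); // prints: true true
```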
This error is triggered when using the AWS Bedrock LLM within the AI Agent node during streaming completion.
No response is received. Non-streaming behavior sometimes works, but streaming consistently fails under the current configuration.
Question
Has anyone successfully used LangChain’s ChatBedrockConverse with streaming in n8n using AWS Bedrock?
Is there a workaround to inject a custom BedrockRuntimeClient into LangChain’s model class?
Any help or ideas would be appreciated!
Information on your n8n setup
- n8n version: 1.104.2
- Database: Default (SQLite)
- n8n EXECUTIONS_PROCESS setting: own
- Running n8n via: Docker
- Operating system: Linux (Docker)
