Community node invoking LangChain Sub-nodes - Can it work?

I am currently building a node package (n8n-nodes-payi) that routes LLM requests through our platform for tracking AI usage. We’ve submitted it for n8n Cloud review and have received helpful feedback, but I am stuck on one issue.

This package includes provider-specific chat model nodes that output AiLanguageModel connections for use in AI Agent workflows. These nodes use ChatBedrockConverse, ChatOpenAI, and ChatAnthropic from the @langchain/* packages — the same classes n8n’s built-in AI nodes use — plus @n8n/ai-utilities for tracing.
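
For reference, the pattern looks roughly like this (a simplified sketch, not the actual package source; the class name, credential name, and proxy URL are illustrative):

```typescript
// Simplified sketch of the flagged pattern (illustrative, not the real source).
// A chat-model sub-node supplies a LangChain model on an AiLanguageModel output.
import {
	NodeConnectionType,
	type INodeType,
	type INodeTypeDescription,
	type ISupplyDataFunctions,
	type SupplyData,
} from 'n8n-workflow';
import { ChatOpenAI } from '@langchain/openai'; // <- the import the scanner flags

export class PayiChatOpenAi implements INodeType {
	description: INodeTypeDescription = {
		displayName: 'Pay-i Chat Model (OpenAI)',
		name: 'payiChatOpenAi',
		group: ['transform'],
		version: 1,
		description: 'OpenAI chat model routed through the Pay-i proxy',
		defaults: { name: 'Pay-i Chat Model (OpenAI)' },
		inputs: [],
		// The AiLanguageModel output is what lets AI Agent nodes consume this node
		outputs: [NodeConnectionType.AiLanguageModel],
		properties: [],
	};

	async supplyData(this: ISupplyDataFunctions): Promise<SupplyData> {
		const credentials = await this.getCredentials('payiApi'); // hypothetical credential name
		const model = new ChatOpenAI({
			apiKey: credentials.apiKey as string,
			configuration: { baseURL: 'https://proxy.payi.example/v1' }, // hypothetical proxy URL
		});
		return { response: model };
	}
}
```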

The scanner flags these as HIGH issues because community nodes can’t require() external packages in the Cloud sandbox, which makes sense from a security perspective.

How it works today: when a customer kicks off a workflow, a single HTTP proxy node handles all providers (OpenAI, Anthropic, Azure, Bedrock, Databricks) via standard HTTP requests to our proxy service on the platform. This passes all scanner checks and covers the core use case.
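
At its core that node is just an execute() method doing plain HTTP, something like this (a sketch; the endpoint and payload shape are illustrative):

```typescript
// Sketch of the proxy node's core, excerpted from the node class.
// No external imports, so it passes the community-node scanner.
import type { IExecuteFunctions, INodeExecutionData } from 'n8n-workflow';

export async function execute(this: IExecuteFunctions): Promise<INodeExecutionData[][]> {
	const items = this.getInputData();
	const returnData: INodeExecutionData[] = [];

	for (let i = 0; i < items.length; i++) {
		// One request shape for every provider; the proxy does the fan-out
		const response = await this.helpers.httpRequest({
			method: 'POST',
			url: 'https://proxy.payi.example/v1/chat', // hypothetical endpoint
			body: {
				provider: this.getNodeParameter('provider', i), // 'openai' | 'anthropic' | 'bedrock' | ...
				model: this.getNodeParameter('model', i),
				messages: items[i].json.messages,
			},
			json: true,
		});
		returnData.push({ json: response });
	}

	return [returnData];
}
```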

But for the LangChain-based nodes that integrate natively into AI Agent workflows, I have some questions based on the feedback from the review process:

  1. Is there a supported method for community nodes to provide AiLanguageModel sub-nodes that plug into the AI agent chain?
  2. Is there a process to allow packages to pass the security review if they only require() modules already present in n8n’s runtime (for example, @langchain/openai is already loaded by n8n’s built-in nodes)?
  3. Are there any planned changes to the community node sandbox that could enable this?

I understand and appreciate the focus on security constraints. The HTTP proxy approach works “okay”, but the LangChain sub-node pattern would give users a much better experience.

I appreciate any guidance!

Welcome to the n8n community, @swharr!
From the current n8n docs, I don’t see an officially supported path for community nodes to expose AiLanguageModel sub-nodes that plug into the AI Agent chain in the same way as built-in LangChain nodes. The docs describe cluster nodes and sub-nodes, but not this as a documented community-node extension pattern.
I also don’t see a documented exception process for packages that require() modules already present in the n8n runtime. The verification guidelines explicitly say community node packages should have no external dependencies, so based on the docs I would not assume shared runtime modules are allowed for verification.

I haven’t found any documented roadmap item about changing the community-node sandbox to enable this either. So for now, the HTTP/proxy approach still seems like the safer documented path, even if the LangChain sub-node UX is better.

Docs for reference:

Community node verification guidelines: Verification guidelines | n8n Docs
Building community nodes: Building community nodes | n8n Docs
Cluster nodes / Sub-nodes: Cluster nodes | n8n Docs
OpenAI Chat Model node: OpenAI Chat Model node documentation | n8n Docs


The sandbox constraint is real and intentional: runtime isolation means shared runtime modules aren’t accessible even if they’re already loaded. The practical path most builders land on is wrapping proxy calls as tool nodes in the AI Agent workflow instead of native LangChain sub-nodes. It’s less seamless UX-wise, but it’s fully sandbox-compatible and works in practice. Opening a GitHub feature request with your specific use case is probably the best lever here; the team does adjust sandbox rules when there’s a clear, concrete need.

The issue isn’t your implementation; it’s a platform limitation. In n8n Cloud, community nodes cannot use require() for external packages (including @langchain/*), even if those packages exist internally in n8n. Because of this, creating LangChain-based AiLanguageModel sub-nodes from a community package is currently not supported.


Working solution

To move forward and pass the review:

  • Drop the LangChain-based nodes from your community package
  • Keep your HTTP proxy node as the core implementation
  • Route all LLM requests (OpenAI, Anthropic, Azure, Bedrock, etc.) through your proxy via standard HTTP

For AI Agent compatibility

Since custom AiLanguageModel sub-nodes are not supported:

  • Use your proxy node before the Agent (to fetch/generate responses)
  • Or replace Agent usage with a manual orchestration pattern (HTTP → logic → response; see the sketch after this list)
  • Optionally, expose your proxy as a generic “LLM endpoint” that works with existing nodes
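
For the manual orchestration option, a Code node after the HTTP Request node can stand in for the Agent’s loop. A minimal sketch (the Code node runs JavaScript; field names like choices and finish_reason are illustrative and depend on your proxy’s response schema):

```js
// n8n Code node sketch: post-process the proxy response (illustrative fields).
const results = [];

for (const item of $input.all()) {
	// Pull the assistant reply out of the proxy response
	const reply = item.json.choices?.[0]?.message?.content ?? '';

	results.push({
		json: {
			reply,
			// Simple routing flag in place of the Agent's tool loop
			needsFollowUp: item.json.finish_reason !== 'stop',
		},
	});
}

return results;
```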

Key limitation

  • There is currently no supported way for community nodes to plug directly into the AI Agent chain using LangChain classes
  • Reusing internal packages (like @langchain/openai) is blocked by the sandbox
  • This is a security design decision and not configurable

Best path forward

Focus on:

  • HTTP-based architecture (proxy)
  • Compatibility with standard nodes
  • Avoiding LangChain dependencies in your package

Thank you, @Benjamin_Behrens and @akingbade-Samuel, for the detailed feedback and write-up. This was super helpful and has given me an idea of how to proceed.

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.