Enable State/Memory Management (this.getContext or $getWorkflowStaticData) in the LangChain Custom Code Tool (AI Agent Code Tool)

What is the feature? I would like the ability to persist temporary data across multiple executions of a LangChain Custom Code Tool (@n8n/n8n-nodes-langchain.toolCode) during a single Agent run. Specifically, exposing this.getContext('node') or $getWorkflowStaticData() inside the tool’s sandbox, just like in the standard Code node.
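For reference, this is roughly what the standard Code node already offers and what I'd like mirrored in the tool sandbox (a minimal sketch; note the known caveat that workflow static data only persists between production executions, not manual test runs):

```js
// Standard Code node today (NOT the Code Tool): per-node static data.
// Passing 'node' scopes the object to this node; 'global' would share
// it across the whole workflow.
const staticData = $getWorkflowStaticData('node');

// Survives between production executions of the workflow.
staticData.runCounter = (staticData.runCounter ?? 0) + 1;

return [{ json: { runCounter: staticData.runCounter } }];
```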

What is the use case? When building complex AI Agents, the agent might loop and call the same custom tool multiple times.

For example, I built a custom “Web Scraper” tool. Sometimes, the LLM hallucinates or loses track of its scratchpad and asks the tool to scrape the exact same URL it visited two iterations ago. If I could use this.getContext('node'), I could easily implement a local visitedUrls array inside the tool’s code. If the URL is already in the array, the tool immediately returns a fast error/cached response, saving time, bandwidth, and precious LLM tokens (especially when running local models like Ollama).
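A sketch of what the tool's code could look like if this were exposed. To be clear, this is hypothetical: this.getContext does not work in the Code Tool sandbox today, and I am assuming the agent's input arrives in the query variable and that this.helpers.httpRequest is reachable from the sandbox, as it is in the standard Code node:

```js
// HYPOTHETICAL: this.getContext('node') is what this feature request
// asks for; it currently throws inside the Code Tool sandbox.
const ctx = this.getContext('node');
ctx.visitedUrls = ctx.visitedUrls ?? [];

// `query` is the URL the agent asked the tool to scrape.
if (ctx.visitedUrls.includes(query)) {
  // Fast, deterministic short-circuit: no network call, no wasted tokens.
  return `ERROR: ${query} was already scraped in this run. Pick a different URL.`;
}
ctx.visitedUrls.push(query);

// Assumption: this.helpers.httpRequest is available here, as in the Code node.
const html = await this.helpers.httpRequest({ url: query });
return String(html);
```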

Currently, that call throws: this.getContext is not a function.

Current Workarounds:

  1. Prompt Engineering: Telling the Agent in the System Prompt to “Maintain an internal log of visited URLs and do not repeat them”. This helps somewhat, but LLMs are not deterministic and routinely fail to honor strict programmatic constraints.

  2. External Database (Overkill): Saving the state in an external DB (such as Postgres or Redis) via HTTP requests inside the tool (see the first sketch after this list). This adds unnecessary latency, extra points of failure, and infrastructure complexity for state that only needs to live for the 30-60 seconds of the Agent’s execution loop.

  3. File System: Writing to a local file (second sketch below), which requires altering Docker/host permissions and enabling NODE_FUNCTION_ALLOW_BUILTIN=fs.
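A rough sketch of workaround 2, assuming a hypothetical REST endpoint in front of the store (Redis and Postgres do not speak HTTP directly) and again assuming this.helpers.httpRequest works in the sandbox:

```js
// Workaround 2 (sketch): external state over HTTP.
// 'http://state-api.internal/visited' is a hypothetical service.
const stateUrl = 'http://state-api.internal/visited';

// One round trip to check whether the URL was already scraped...
const check = await this.helpers.httpRequest({
  url: `${stateUrl}?url=${encodeURIComponent(query)}`,
  json: true,
});
if (check.seen) {
  return `ERROR: ${query} was already scraped in this run.`;
}

// ...and another to record it: two network calls per tool invocation,
// plus a store that must be cleaned up after the agent run.
await this.helpers.httpRequest({
  method: 'POST',
  url: stateUrl,
  body: { url: query },
  json: true,
});
```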
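And a sketch of workaround 3, assuming NODE_FUNCTION_ALLOW_BUILTIN=fs is set and the container user can write to the chosen path (both assumptions):

```js
// Workaround 3 (sketch): file-based state.
const fs = require('fs');
const stateFile = '/tmp/visited-urls.json'; // hypothetical path

let visited = [];
if (fs.existsSync(stateFile)) {
  visited = JSON.parse(fs.readFileSync(stateFile, 'utf8'));
}

if (visited.includes(query)) {
  return `ERROR: ${query} was already scraped in this run.`;
}

visited.push(query);
fs.writeFileSync(stateFile, JSON.stringify(visited));
// NOTE: unlike a per-run context, this file outlives the agent run and
// must be cleaned up (or keyed by execution ID) manually.
```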

Proposed Solution: Allow the Custom Code Tool to access a memory object scoped to the current execution (or to the specific item). Even a simple this.getToolContext() that resets after the Agent finishes its run would be a game changer for building deterministic, cost-effective AI loops.
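A sketch of how that proposed API could be used (this.getToolContext() does not exist today; the name and the reset-on-completion semantics are exactly what this request proposes):

```js
// PROPOSED (does not exist yet): a scratchpad created when the Agent
// starts its run and discarded when the run finishes.
const ctx = this.getToolContext();
ctx.visitedUrls = ctx.visitedUrls ?? [];

if (ctx.visitedUrls.includes(query)) {
  return `ERROR: already visited ${query} in this run.`;
}
ctx.visitedUrls.push(query);

// ...proceed with the actual scrape...
```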