Issue with Dynamic LLM Expressions in OpenAI Agent Node (n8n v1.81.4 Local)

Hi everyone,
I’m experiencing a problem with my local installation of n8n (version 1.81.4) when creating an OpenAI agent using the “Message a Model” node—the only node I’ve found that returns output in JSON format. When I select an LLM from the OpenAI list, the node properly connects to the tools. However, if I use a dynamic LLM expression, the agent node loses its connection with the tools, preventing me from using any of them. I also tried the standard Agent Tools node, but it doesn’t allow “output to JSON.”
Any suggestions or workarounds would be greatly appreciated. Thanks for your help!

Information on your n8n setup

  • n8n version: 1.81.4
  • Database: SQLite (default)
  • n8n EXECUTIONS_PROCESS setting: default
  • Running n8n via: Docker
  • Operating system: Windows 11 24H2 (build 26100.3476)

The Tools Agent does have a JSON output ability: turn this switch on, then connect the output parser to the newly appeared bottom output:

Thank you for your suggestion. I understand that the Tools Agent node offers a structured output by enabling the “Require Specific Output Format” option and connecting an appropriate output parser.
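For context, the output parser in that setup is driven by a schema. A minimal sketch of what such a schema might look like (the field names are purely illustrative, and I'm assuming a standard JSON Schema here), made as permissive as possible with `additionalProperties`:

```json
{
  "type": "object",
  "properties": {
    "answer": { "type": "string" }
  },
  "additionalProperties": true
}
```

Even with a permissive schema like this, the output is still constrained to a predeclared shape, which is the core of my issue below.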

However, my requirement is for an automatic output, similar to what the “Message a Model” node provides, because the LLM’s responses vary each time. The preformatted structure from the Tools Agent doesn’t accommodate the dynamic nature of the LLM’s outputs in my use case.

Additionally, I prefer not to add an extra LLM-based output parser to fix the response, as it increases both workflow execution time and computational costs.
