Tool output invisible to agent (native agent node + verified community node)

Describe the problem/error/question

Short: Tool output never “reaches” the agent.

Long:
There seems to be a difference between using a node manually and using it as a tool:

  • When I use a certain node manually, it returns the values I expect
  • When I use the same node as a tool (and AI provides the exact same value as I provide manually), the results aren’t returned to the agent

The issue is clearly visible in the “last output” (see below): the “observation” field is empty. I’ve tried investigating this further, but I can’t expose any relevant data. It appears that the complex JSON returned by the tool simply never “reaches” the agent.

Screenshots:
Overview:

Agent log:

Tool log:

Manual usage DOES work as expected:

The issue appears to be the exact same as in this (closed) topic:

What is the error message (if any)?

None; the agent “thinks” all works as expected.

Please share your workflow

Share the output returned by the last node

[
  {
    "output": "It appears that there were no valid term-values found for \"hubspot\" in the database. This may suggest a spelling variation, absence in the source, or a temporary issue. If you have an alternative spelling or a similar term to search for, please let me know!",
    "intermediateSteps": [
      {
        "action": {
          "tool": "Get_parameter_suggestions_in_Bedrijfsdata",
          "toolInput": {
            "Search_Query": "hubspot"
          },
          "toolCallId": "call_R74WsDlokcKe053esDm2OSqG",
          "log": "Invoking \"Get_parameter_suggestions_in_Bedrijfsdata\" with {\"Search_Query\":\"hubspot\"}\n",
          "messageLog": [
            {
              "lc": 1,
              "type": "constructor",
              "id": [
                "langchain_core",
                "messages",
                "AIMessageChunk"
              ],
              "kwargs": {
                "content": "",
                "additional_kwargs": {
                  "tool_calls": [
                    {
                      "index": 0,
                      "id": "call_R74WsDlokcKe053esDm2OSqG",
                      "type": "function",
                      "function": {
                        "name": "Get_parameter_suggestions_in_Bedrijfsdata",
                        "arguments": "{\"Search_Query\":\"hubspot\"}"
                      }
                    }
                  ]
                },
                "response_metadata": {
                  "prompt": 0,
                  "completion": 0,
                  "usage": {
                    "prompt_tokens": 484,
                    "completion_tokens": 24,
                    "total_tokens": 508,
                    "prompt_tokens_details": {
                      "cached_tokens": 0,
                      "audio_tokens": 0
                    },
                    "completion_tokens_details": {
                      "reasoning_tokens": 0,
                      "audio_tokens": 0,
                      "accepted_prediction_tokens": 0,
                      "rejected_prediction_tokens": 0
                    }
                  },
                  "finish_reason": "tool_calls",
                  "system_fingerprint": "fp_51e1070cf2",
                  "model_name": "gpt-4.1-2025-04-14",
                  "service_tier": "default"
                },
                "tool_call_chunks": [
                  {
                    "name": "Get_parameter_suggestions_in_Bedrijfsdata",
                    "args": "{\"Search_Query\":\"hubspot\"}",
                    "id": "call_R74WsDlokcKe053esDm2OSqG",
                    "index": 0,
                    "type": "tool_call_chunk"
                  }
                ],
                "id": "chatcmpl-BwR31HsrTKYUpQ2AX3Lvbrp2Bcmsx",
                "usage_metadata": {
                  "input_tokens": 484,
                  "output_tokens": 24,
                  "total_tokens": 508,
                  "input_token_details": {
                    "audio": 0,
                    "cache_read": 0
                  },
                  "output_token_details": {
                    "audio": 0,
                    "reasoning": 0
                  }
                },
                "tool_calls": [
                  {
                    "name": "Get_parameter_suggestions_in_Bedrijfsdata",
                    "args": {
                      "Search_Query": "hubspot"
                    },
                    "id": "call_R74WsDlokcKe053esDm2OSqG",
                    "type": "tool_call"
                  }
                ],
                "invalid_tool_calls": []
              }
            }
          ]
        },
        "observation": ""
      }
    ]
  }
]

Information on your n8n setup

  • n8n cloud
  • n8n version: 1.102.4
  • Database: default
  • n8n EXECUTIONS_PROCESS setting: default
  • Running n8n via n8n cloud

Any ideas anyone?

I’m facing the same problem! I need to send the agent’s output to a tool. In my case I’m building a team of agents, where each one has its own behavior, permissions, and tools, and I need to pass the agent’s output to a tool so that the next sector/agent is already contextualized when the conversation is handed over. However, from the documentation and logs, the tool is called before the agent produces its output, which means that data cannot be passed along. This makes it very difficult to send the data to the other flow via the tool (a webhook POST).

This link should help you understand the data flow; act accordingly.


In my case I need to use the Structured Output Parser’s output within the tool. Due to the order of execution, the “transfer to” node runs before the agent’s structured output is produced, so I cannot send the first agent’s context to the next flow.

One workaround is to send everything to the next sector only after the first agent has finished. That way I can forward all the data that was processed after the agent exits, but ideally all of this would be sent at the time the agent/tool executes.

In the pink highlight, notice that the tool is executed before the agent exits, which makes it impossible to pass along all the data the next agent needs.

The log shows the POST-type tool, which sends all the data (except the agent’s output) to the other (commercial) webhook. The goal is to pass along the full context so the next agent can continue the conversation seamlessly.

Enable “Return Intermediate Steps” in every AI Agent. Each agent will then return, alongside its output, an `intermediateSteps` object containing an `observation` field that holds the tool’s output (plus the intermediate steps of the executed tool, if that tool is itself an AI Agent with the same setting enabled). It ends up being a kind of loop if you use subworkflows.
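For comparison with the empty `observation` in the log above, here is a rough sketch of what a successful run should look like with that setting enabled. The `output` text and the suggestion values inside `observation` are purely illustrative; only the tool name, input, and call ID are taken from the original log:

```json
[
  {
    "output": "Suggestions found for \"hubspot\": ...",
    "intermediateSteps": [
      {
        "action": {
          "tool": "Get_parameter_suggestions_in_Bedrijfsdata",
          "toolInput": { "Search_Query": "hubspot" },
          "toolCallId": "call_R74WsDlokcKe053esDm2OSqG"
        },
        "observation": "{\"suggestions\": [\"HubSpot\", \"HubSpot CRM\"]}"
      }
    ]
  }
]
```

If `observation` stays empty like in the original post, the tool’s result is never making it back into the agent’s reasoning loop, which is why the model concludes nothing was found.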

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.