Output data while in conversation with an AI Agent

How can I output data from an AI Agent so that the Agent doesn't pull it back into the conversation?

Basically I want to order something and, at the end of the chat, send a JSON with the order data.

The AI Agent tends to send the closing message and the JSON together, so I split them with a Code node, but the Agent then also picks up the output of the Code node :confused:
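Roughly, the split in the Code node looks like this (simplified sketch; the delimiter and the field names are just placeholders):

```js
// n8n Code node, "Run Once for All Items" (sketch only).
// The agent is prompted to append the order JSON after a marker like "###ORDER###".
const reply = $input.first().json.output;               // text coming from the AI Agent
const [message, orderRaw] = reply.split('###ORDER###');

return [
  {
    json: {
      message: (message || '').trim(),                        // closing message for the chat
      order: orderRaw ? JSON.parse(orderRaw.trim()) : null,   // structured order data
    },
  },
];
```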

Am I forced to use tool sub-nodes in this situation?

This is the setup:

Information on your n8n setup

  • n8n version: 1.90.1
  • Database (default: SQLite): postgres
  • n8n EXECUTIONS_PROCESS setting (default: own, main): queue
  • Running n8n via (Docker, npm, n8n cloud, desktop app): npm
  • Operating system: ubuntu 24.04

Hi,

It’s difficult to say much from a screenshot alone.

Can you run a simple test and use a Set node with one field “output” set to “hello” (right after your Code node)? Does the AI Agent then reply with “hello” only, every time?

If that works, I suggest you try a basic LLM chain with a Structured Output Parser to separate and structure the order information as well as the conversational reply. That way you force which part is which.
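For example, the Structured Output Parser could be given a schema along these lines (the field names are just an example for a restaurant order, adjust to your menu):

```json
{
  "type": "object",
  "properties": {
    "reply": { "type": "string", "description": "Closing message to show the customer" },
    "order": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "item": { "type": "string" },
          "quantity": { "type": "number" }
        }
      }
    }
  },
  "required": ["reply", "order"]
}
```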

Reg,
J.

Thanks. Yes, the bot will reply with whatever is in the final output. So what I did was plug the “Comanda” Code node into the last Edit Base node.

The Edit Base node is actually only picking up the first Code node, where the split between the answer and the JSON happens, so it only responds with the message. Maybe (I guess) I can branch the Comanda node out to an HTTP Request node for the JSON, or put the HTTP Request node between the Code and Edit Base nodes…
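For the JSON branch I'm imagining a small Code node in front of the HTTP Request, something like this (sketch, reusing the message/order fields from the split above):

```js
// Sketch of the branch feeding the HTTP Request node; keeps only the order JSON.
// Assumes the earlier split produced items shaped like { message, order }.
const { order } = $input.first().json;

// Skip the call when no order was placed in this conversation
if (!order) {
  return [];
}

return [{ json: order }];   // HTTP Request node then sends this as the request body
```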

Basically I'm just asking how this situation is usually handled, i.e. how to get the data out of the loop.

For this example the KB is short, a simple restaurant menu, so it’s included in the system prompt.

Hi,

did you try adding an output parser to force a JSON schema on the agent?

The Output Parser will apply to every reply of the bot, right?
I thought about adding a parser on a second agent afterwards, but I’m trying to keep things simple.

In any case, the bot is not very deterministic about the listed items even at temperature 0.0, so I will add a second agent to the loop regardless.

Hi, well, I guess so. You could try to split it with logic beforehand, but then you are back in non-deterministic territory.

You could try something like categorising the operation and taking different logic and output paths from there.

In my tests, temperature has very little effect on the response and on tool calling, imho.

Reg
J.