Why am I getting occasional raw output in the chat box from an AI agent using Gemini Flash as the LLM?

Describe the problem/error/question

I’m getting occasional raw outputs in my chat window. My setup: Chat Trigger, AI Agent, gemini-flash-2.0 as the LLM, Postgres memory, 3 tools reading/writing to Google Sheets, and 2 Pinecone vector databases.

Most of the time the AI returns the usual expected messages, but occasionally when I test edge-case prompts it returns raw output into the chat box. See the ‘error’ message below, though it’s not really an error… how do I fix this?

What is the error message (if any)?

Error message (I get this as an actual reply in my chatbox):

```json
[
  {
    "type": "text",
    "text": "I can help with that! I’ll pull some tips on how to make your follow-ups feel more natural.\n"
  },
  {
    "functionCall": {
      "name": "Sales_training",
      "args": {
        "input": "How to make follow-ups feel more natural and less pushy?"
      }
    }
  },
  {
    "functionCall": {
      "name": "SAP_Tags",
      "args": {
        "input": "follow-up"
      }
    }
  }
]
```

Please share your workflow

(Select the nodes on your canvas and use the keyboard shortcuts CMD+C/CTRL+C and CMD+V/CTRL+V to copy and paste the workflow.)

Share the output returned by the last node

Information on your n8n setup

  • n8n version: Latest
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main): default
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Render
  • Operating system: Mac

I think you might be missing a formatter/parser node to convert this response into a clean, readable chat reply. You mentioned that it works fine in normal cases; that’s likely because most of the time the LLM returns plain text. On edge prompts, however, Gemini decides to trigger function calls as well, which is what we see in your output (“Sales_training”, “SAP_Tags”), and the system doesn’t know what to do with them.
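As a rough sketch of what such a parser node could do, here’s a small function in the style of an n8n Code node that keeps only the `text` parts of a raw Gemini response array and drops the `functionCall` entries the chat UI can’t render. The function name and the assumption about the input shape are mine, not from your workflow; adapt the field access to whatever your last node actually outputs.

```javascript
// Hypothetical parser sketch: flatten a raw Gemini "parts" array into plain chat text.
// Assumes the raw reply is either a plain string or a (possibly JSON-encoded)
// array of parts like [{ "text": ... }, { "functionCall": ... }].
function cleanGeminiOutput(raw) {
  let parts = raw;

  // The raw reply may arrive as a JSON string; try to decode it.
  if (typeof raw === 'string') {
    try {
      parts = JSON.parse(raw);
    } catch {
      return raw; // already plain text: pass through unchanged
    }
  }

  // Anything that isn't an array of parts is returned as a string as-is.
  if (!Array.isArray(parts)) return String(raw);

  // Keep only the text parts, drop functionCall entries, and join them.
  return parts
    .filter((part) => typeof part.text === 'string')
    .map((part) => part.text)
    .join('')
    .trim();
}
```

In a Code node you would call something like `cleanGeminiOutput($json.output)` and return the result as the chat message, so normal text replies pass through untouched while mixed text-plus-function-call arrays are reduced to their readable portion.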

Hmm, I added a node to parse the response into a clean, readable reply. So far so good, but once the context gets larger the LLM doesn’t seem to call any more tools and runs into a bad parameter error: 400. Searching through the boards, this seems to be an ongoing issue since last year. I’ll keep a lookout for an update/fix, but it’s back to OpenAI for now…