AI Agent node keeps adding empty data to any memory store [causing downstream errors]

Currently traveling, so I may not have done as much due diligence as I normally would.

But:

Setup: the AI Agent node with Gemini 2.0 Flash (and other models), PostgreSQL memory (or Simple Memory), plus a few tool calls. The agent consistently inserts a message object without a “content” field into whichever memory is attached.

This causes downstream errors: when the agent builds the next LLM request from the previously stored context, the provider rejects it because a message in the conversation chain cannot have empty content.

I have instructed the model never to store or output empty content, switched memory types (just in case), and tried a fair few other ideas.

I cannot seem to fix this. I presumed the LLM was returning an empty response for some reason, but the logs do not show that. I have a feeling it's a memory-storage bug. I also searched the forums, but could not find anything with the keywords I used.
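
If you want to check what actually landed in memory rather than relying on the execution logs, you can query the Postgres table directly. This is only a sketch: the table name `n8n_chat_histories`, the `message` jsonb column, and the `{ type, data: { content } }` shape are assumptions about the default Postgres Chat Memory layout and may differ on your instance.

```typescript
// Minimal sketch: list stored messages whose content key is missing or empty.
// Table/column names and the jsonb shape are assumptions -- adjust as needed.
import { Client } from "pg";

async function findEmptyContentMessages(connectionString: string) {
  const client = new Client({ connectionString });
  await client.connect();
  try {
    const { rows } = await client.query(
      `SELECT id, session_id, message
         FROM n8n_chat_histories
        WHERE message -> 'data' ->> 'content' IS NULL
           OR message -> 'data' ->> 'content' = ''`
    );
    for (const row of rows) {
      console.log(`session ${row.session_id}, row ${row.id}:`, row.message);
    }
    return rows;
  } finally {
    await client.end();
  }
}

findEmptyContentMessages(process.env.DATABASE_URL ?? "").catch(console.error);
```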

I feel like an easy fix would be to always store the content key (even as an empty string or placeholder), whether the model outputs nothing or memory parsing/storage fails.
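
To illustrate the idea, a defensive pass like the one below, applied to the loaded history before the request is built, would guarantee the key is always present. This is a sketch only; the `ChatMessage` shape is an assumption for illustration, not n8n's internal type.

```typescript
// Sketch of the proposed behaviour: never let a message go out without a
// usable "content" value. The ChatMessage shape is an assumption.
interface ChatMessage {
  role: "system" | "user" | "assistant" | "tool";
  content?: string;
  [extra: string]: unknown;
}

function ensureContent(messages: ChatMessage[]): ChatMessage[] {
  return messages.map((msg) => ({
    ...msg,
    // Fall back to a stub so the provider never sees a missing or empty
    // content field; some providers also reject "", hence a single space.
    content:
      typeof msg.content === "string" && msg.content.length > 0
        ? msg.content
        : " ",
  }));
}

// Usage: sanitize the loaded history right before the LLM request is built.
// const safeHistory = ensureContent(loadedHistory);
```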

You can see this here:

I could easily be missing something; I'm just curious whether anyone else has encountered this and managed to fix it. I am fine moving to GPT-4.1, as I was only after the Gemini models' context window (if that is even related).

My prompt also contains instructions to stop the model from wrapping input fields in extra quotes, which fixed two earlier errors related to printing tool calls and adding extra quotes to tool-call inputs.

I am on the latest stable version (I believe); I pulled just before working on this, yesterday morning. Self-hosted with Docker Compose, 1.89.2.

This isn't urgent for me, as I have since recreated the entire workflow as a more classic logic-based automation with AI calls for the sorting, and that works flawlessly.

I would just like to document whether this is a bug, or whether there is something I am missing, to help resolve similar situations in the future.
