Running n8n via (Docker, npm, n8n cloud, desktop app): Docker
Operating system: Ubuntu
I’ve been working on Cole Medin’s example and I’m running into some difficulties with the output from the Supabase Vector Store, depending on the operation mode.
Here we have a working version where we set the Supabase Vector Store to Retrieve Documents (As Vector Store for Chain/Tool).
When testing the workflow, the response from the Supabase Vector Store shows as:
We can see the response is well formatted for our first document and the metadata is clearly split out.
In the second version we set the Supabase Vector Store to Retrieve Documents (As Tool for AI Agent) and also check the ‘include metadata’ flag (although unselecting this doesn’t resolve the issue).
When testing the workflow it fails fast with “Error: Non string tool message content is not supported at convertToolMessageToOllama”.
When we examine the Supabase Vector Store’s response:
We can see that in the second version a new key has been added, ‘type’, and the well-formatted response from the other workflow has been collapsed into a single key, ‘text’.
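For illustration (the document content here is made up; only the ‘type’ and ‘text’ keys come from the observed output), the two response shapes look roughly like:

```javascript
// Chain/Tool mode: the tool returns a plain string, which the
// Ollama message converter accepts.
const chainToolResponse = "Document content...\nmetadata: source=example.pdf";

// AI Agent tool mode: the same content comes back wrapped in an
// object with "type" and "text" keys, which the converter rejects.
const agentToolResponse = { type: "text", text: "Document content..." };

console.log(typeof chainToolResponse); // "string"
console.log(typeof agentToolResponse); // "object"
```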
Is this expected behaviour? Does the AI Agent simply require a different format from the Chain/Tool? If so, is there a way around this so it can interpret it? Could it be resolved with an Output Parser somehow, or am I completely missing the point?
Any enlightenment on how to resolve this error would be greatly appreciated.
NB: Ollama is hosting mistral-small:24b-instruct-2501-q8_0.
I tried switching to the qwen2.5:32b model and got a similar failure to the mistral model.
I then tried switching the model to ishumilin/deepseek-r1-coder-tools:8b in Ollama. With this model the workflow completes, although it doesn’t pull any information from the Supabase Vector Store.
Possibly the models served in Ollama don’t have the requisite tool-calling support for this interaction.
It is frustrating that the Vector Store Tool can’t handle the metadata, since at least in this version the metadata is returned from the Vector Store before processing and the workflow completes.
Final update.
Using an old method suggested by @Jim_Le in the post Get metadata from Vector Store Tool, I was able to construct a workflow that works with the mistral model and allows answers to include my metadata information.
Hey @iam_rob
Thanks for posting this! Just reading through your issue, it looks like the main difference is that the Supabase Vector Store tool returns an object response instead of a string:
In the vector store tool, a middleman LLM summarises the vector store documents before they reach the agent.
In the subworkflow solution, you do the conversion yourself with toJsonString().
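As a rough sketch (the variable names and object shape are assumptions, and n8n’s `.toJsonString()` expression helper is approximated here with plain `JSON.stringify`), the subworkflow conversion amounts to:

```javascript
// The agent-tool path returns an object; Langchain's Ollama
// converter only accepts string tool content, so serialise the
// object before it gets back to the agent.
const toolOutput = {
  type: "text",
  text: "Document content...",
  metadata: { source: "example.pdf" }, // assumed shape, for illustration
};

// Roughly what .toJsonString() does inside an n8n expression:
const stringContent = JSON.stringify(toolOutput);
console.log(typeof stringContent); // "string"
```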
This isn’t necessarily wrong - I feel object responses are equally valid! This is just down to the particular Langchain code that converts tool responses for Ollama not handling objects - most probably by design.
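A rough sketch of the kind of guard involved (this is not the actual Langchain source, just an illustration of why object responses fail where strings pass):

```javascript
// Illustrative only: a converter that, like the Ollama message
// converter, only supports string tool content.
function convertToolMessageContent(content) {
  if (typeof content !== "string") {
    throw new Error("Non string tool message content is not supported");
  }
  return content;
}

console.log(convertToolMessageContent("plain string content")); // passes through
// convertToolMessageContent({ type: "text", text: "..." }) would throw
```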
Not sure where you’d post this bug… it feels like you should really take it up with Langchain, but ultimately, should n8n patch this until they fix it on their end?
Just as another update, Cole actually dropped a new video today (https://www.youtube.com/watch?v=T2QWhXpnT5I) and he managed to resolve the “Non string tool message content is not supported” error.
You can switch out the Ollama Chat Model node for the OpenAI Chat Model node referencing the Ollama instance, and it now works.
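For reference, Ollama serves an OpenAI-compatible API under `/v1`, so the OpenAI Chat Model node’s credentials just need a base URL pointing at it. A minimal sketch of the equivalent client configuration (the API key is a placeholder; Ollama doesn’t validate it, but the field can’t be empty):

```javascript
// Hypothetical configuration sketch for pointing an OpenAI-style
// client at a local Ollama instance instead of api.openai.com.
const config = {
  baseURL: "http://localhost:11434/v1", // Ollama's OpenAI-compatible endpoint
  apiKey: "ollama",                     // placeholder; Ollama ignores it
  model: "mistral-small:24b-instruct-2501-q8_0",
};

console.log(config.baseURL); // "http://localhost:11434/v1"
```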