Getting '[ERROR: Non string tool message content is not supported]' when using an agent and Ollama to access Qdrant

So this issue has been bugging me for a while. It seems to happen with all Ollama models but not with OpenAI. Is there an option I need to configure? This makes using tools within agents with Ollama unusable.

Here's the workflow:

{ "nodes": [ { "parameters": { "promptType": "define", "text": "=Date: {{ $now }}\n\nUser Prompt: {{ $json.chatInput }}", "hasOutputParser": true, "options": { "systemMessage": "You have access to two long term memory tools (different to the conversation history), you can store and retrieve long term memories. always retrieve memories from the qdrant vector store (Access Memory Tool) to see if they add any context that could be useful in aiding you to respond to the users query.\n\nIMPORTANT: Do not create long term memories on every user prompt, always determine if its something worth remembering.\n\nCreate a long term memory using the memory tool whenever you believe that something the user said is worth remembering, for example 'i don't like cheese' or 'i'm size 8 shoes' - these could be useful in later conversations. DO NOT USE THIS TOOL ON EVERY USER PROMPT, only store memories worth remembering." } }, "type": "@n8n/n8n-nodes-langchain.agent", "typeVersion": 1.7, "position": [ 220, 0 ], "id": "a098d361-14d7-4b08-8a00-7dce7882c589", "name": "AI Agent" }, { "parameters": { "model": "command-r7b:latest", "options": {} }, "type": "@n8n/n8n-nodes-langchain.lmChatOllama", "typeVersion": 1, "position": [ 120, 260 ], "id": "ccdb57cd-fc92-4c07-87d2-08047a172429", "name": "Ollama Chat Model", "credentials": { "ollamaApi": { "id": "OyXUCOXv8zh5NSmM", "name": "Ollama account" } } }, { "parameters": { "tableName": "n8n_test_chats" }, "type": "@n8n/n8n-nodes-langchain.memoryPostgresChat", "typeVersion": 1.3, "position": [ 280, 260 ], "id": "922fdd15-a5b0-49fa-8902-fa26274e4f48", "name": "Postgres Chat Memory", "credentials": { "postgres": { "id": "CCxoJS7PuMPUDtxT", "name": "Postgres account" } } }, { "parameters": { "options": {} }, "type": "@n8n/n8n-nodes-langchain.chatTrigger", "typeVersion": 1.1, "position": [ -120, 0 ], "id": "f16fdac4-1c53-4bda-a680-3f775b2caecb", "name": "When chat message received", "webhookId": "e250c0ef-9983-4f43-9fbe-0bce74d9c403" }, { "parameters": { "model": "mxbai-embed-large:latest" }, "type": "@n8n/n8n-nodes-langchain.embeddingsOllama", "typeVersion": 1, "position": [ 440, 360 ], "id": "3ea0cf81-9565-4bd4-b9e9-c84f1eab9f74", "name": "Embeddings Ollama", "credentials": { "ollamaApi": { "id": "OyXUCOXv8zh5NSmM", "name": "Ollama account" } } }, { "parameters": { "name": "storeMemories", "description": "Call this tool whenever the user discloses or provides information about themselves you think should be remembered long term. Call this tool whenever you feel storing a memory would aid and assist in future conversations where the conversation memory will have been forgotten.\nInput the memory in the memory field, and the memory topic in the memory topic field", "workflowId": { "__rl": true, "value": "x4dxqhsUH07d8Ht9", "mode": "list", "cachedResultName": "Memory Store" }, "workflowInputs": { "mappingMode": "defineBelow", "value": { "memory": "={{ $fromai('memory_to_store') }}", "memoryTopic": "={{ $fromai('topic_of_memory') }}" }, "matchingColumns": [], "schema": [ { "id": "memory", "displayName": "memory", "required": false, "defaultMatch": false, "display": true, "canBeUsedToMatch": true, "type": "string" }, { "id": "memoryTopic", "displayName": "memoryTopic", "required": false, "defaultMatch": false, "display": true, "canBeUsedToMatch": true, "type": "string" } ], "attemptToConvertTypes": false, "convertFieldsToString": false } }, "type": "@n8n/n8n-nodes-langchain.toolWorkflow", "typeVersion": 2, "position": [ 700, 220 ], "id": "c76f26c8-b4ca-432d-b048-23c95bdd3cb6", "name": "Store Memories" }, { "parameters": { "mode": "retrieve-as-tool", "toolName": "user_long_term_memory", "toolDescription": "This tool allows you to access memories you have created about the user. Call it in every chat, and if relevant, use your memories about the user to tailor your response.\n\nAlways output as string.", "qdrantCollection": { "__rl": true, "value": "memories", "mode": "list", "cachedResultName": "memories" }, "includeDocumentMetadata": false, "options": {} }, "type": "@n8n/n8n-nodes-langchain.vectorStoreQdrant", "typeVersion": 1, "position": [ 420, 220 ], "id": "ec643690-b571-4ca8-bd17-aa10fc6e1a0f", "name": "Access Memory", "credentials": { "qdrantApi": { "id": "jgWIiGVLBrPh9fcY", "name": "QdrantApi account" } } } ], "connections": { "Ollama Chat Model": { "ai_languageModel": [ [ { "node": "AI Agent", "type": "ai_languageModel", "index": 0 } ] ] }, "Postgres Chat Memory": { "ai_memory": [ [ { "node": "AI Agent", "type": "ai_memory", "index": 0 } ] ] }, "When chat message received": { "main": [ [ { "node": "AI Agent", "type": "main", "index": 0 } ] ] }, "Embeddings Ollama": { "ai_embedding": [ [ { "node": "Access Memory", "type": "ai_embedding", "index": 0 } ] ] }, "Store Memories": { "ai_tool": [ [ { "node": "AI Agent", "type": "ai_tool", "index": 0 } ] ] }, "Access Memory": { "ai_tool": [ [ { "node": "AI Agent", "type": "ai_tool", "index": 0 } ] ] } }, "pinData": {}, "meta": { "templateCredsSetupCompleted": true, "instanceId": "558d88703fb65b2d0e44613bc35916258b0f0bf983c5d4730c00c424b77ca36a" } }

Information on your n8n setup

  • n8n version: 1.76.3
  • Database (default: SQLite): default
  • n8n EXECUTIONS_PROCESS setting: default
  • Running n8n via: Docker
  • Operating system: macOS Sonoma 14.7

It looks like your topic is missing some important information. Could you provide the following if applicable?

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

Same thing happened to me when retrieving documents from the Qdrant vector store: OpenAI, Claude, and Gemini all work, but Ollama doesn't.

I've not figured it out yet, sadly. My current workaround is to have an 'ask questions about documents' node right at the beginning; if that returns 'I don't know.', the chat gets sent to another agent. This means I can still ask questions about my vector store, but it really limits my options when I want to call more than one tool/node. It does avoid the 'tool returned non-string' error though, which proves that Ollama can handle the returned data.
I'm thinking the best way around it for now is probably to create an external workflow to handle the document retrieval and do some JS cleanup at the end, but I've yet to test this. It's not an ideal solution at all, as it makes the whole process longer and takes up more compute.
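
In case it's useful, here's a rough sketch of the kind of cleanup Code node I have in mind for the end of that external workflow. It's untested, and it assumes the Qdrant node's usual output shape with a document.pageContent field:

// Untested sketch: collapse the Qdrant results into one plain string,
// so whatever calls this workflow gets string content back.
const chunks = $input.all().map((item, i) => {
  const doc = item.json.document ?? item.json; // fall back if the shape differs
  return `Result ${i + 1}:\n${doc.pageContent ?? JSON.stringify(doc)}`;
});

// Return a single item whose response field is a plain string.
return [{ json: { response: chunks.join('\n\n') } }];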


I am getting the same issue with Groq, using Llama. Let me know if you find a fix.


Same here, I'm getting this error too.

Can confirm it's an Ollama issue with Qdrant. I just did a similar setup and am receiving the same error, which is annoying. @Finlay's workaround suits for now.

I'm having the exact same issue and would appreciate it if you could post a screenshot of your solution, @Finlay!

Here’s an example workflow I’ve made to demonstrate the idea. It has its limitations but works for now.

Here’s the prompt for the question and answer chain RAG agent:

*'You are a document retrieval agent with RAG abilities.

RULES:

Search the Qdrant database with the user query; if you are able to find information relating to the query, answer the question.

IMPORTANT: If you can’t find any information relevant to the user query, respond exactly with ‘I don’t know’ - in exactly that format, don’t add any more words or context - just explicitly write ‘I don’t know’

Example Interaction 1:

User: What does the gen~ object do in Max?

(Search Vector Database)
— Vector search returns no relevant results
AI: I don’t know

Example Interaction 2:

User: What's the company policy for leaving early?

(Search Vector Database)
— Vector search returns relevant results

AI: The company policy on leaving early states that you must notify your line manager at least 1 hour before leaving early; if you don't, you may be subject to an internal investigation.'*

The If node checks whether the response is 'I don't know'. If it is, the chat gets sent on to the agent; if it isn't, the workflow returns the response from the question and answer chain.
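
For completeness, the condition in the If node is just a strict string comparison. As an n8n expression it's something like this (a sketch; 'output' stands in for whatever field your question and answer chain actually returns, so adjust the name):

{{ $json.output.trim() === "I don't know" }}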


Hi,

I had a similar issue with qdrant and ollama.
I "fixed" it by creating a sub workflow that includes the Qdrant Vector Store node with the operation mode Get Many. As the prompt, I simply forwarded the user's query into it using {{ $json.query }}. Afterwards, I converted the result into text using a Code node with JSON.stringify.

Edit:
this works with the AI Agent and multiple tools. In my case I get data from it and send it to the next tool via the AI Agent.

My workaround is to use the OpenAI Chat Model node and set the base URL in my credentials to my local Ollama server: http://localhost:11434/v1.
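
If anyone wants to sanity check that endpoint before wiring up the credentials, here's a minimal sketch (plain Node.js 18+, run as an .mjs file, outside n8n); the model name is just an example, use whatever you have pulled locally:

// Untested sketch: call Ollama's OpenAI-compatible chat endpoint directly.
const res = await fetch('http://localhost:11434/v1/chat/completions', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    model: 'command-r7b:latest', // example model, use one you have pulled
    messages: [{ role: 'user', content: 'Say hello' }],
  }),
});
const data = await res.json();
console.log(data.choices[0].message.content);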


Hello,

I’m experiencing the exact same issue. I tried the workaround suggested by Jimi, but the same error persists.

@Thore, could you share more details about your subflow?

The sub workflow looks like this: the trigger feeds the Qdrant Vector Store node (operation mode Get Many), which feeds the Code node.

The workflow accepts the chatInput and uses it for the prompt.

The Code node looks like this:

// Flatten each Qdrant result: pull out the metadata fields we need
// and serialize them, so the tool returns a plain string to the agent
// instead of a nested object.
for (const item of $input.all()) {
  const new_item = {
    'sql_query': item.json.document.metadata.sql_query,
    'response_format': item.json.document.metadata.response_format,
  };
  const jsonString = JSON.stringify(new_item);
  item.json.text_string = jsonString;
}

return $input.all();

@jimi, which LLM model are you using? When I try it with the OpenAI Chat Model node, it responds with the same error.
Any special startup/environment settings for Ollama?

Thanks for your help. Still facing the same issue; I will wait for a fix.



I managed to get around the error by adding an intermediate block.

Same issue in 1.77.3 and 1.82.3