AI Agent outputs Qdrant Vector Store json and not an actual response

Hello, first time using n8n. I'm trying to make a simple chatbot using the Ollama model llama3.2:3B and the Qdrant Vector Store to store Word documents containing FAQs.

I’m testing things with Postman so that the chat can later be called from a mobile app with a custom UI. Most of the time the output is as expected, but sometimes I get these weird outputs that look like they were intended for Qdrant, e.g.

This also happens inside the web UI, so it’s not a Postman-specific issue.

No errors, just a JSON response that doesn’t answer the question. Does anyone have an idea what’s causing this?
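Until the root cause is fixed, a client-side guard can keep stray tool-call JSON out of the mobile app: if the webhook body parses as JSON and contains tool-call-style keys instead of prose, show a fallback message. A minimal sketch — the suspect key names are assumptions based on common tool-call payloads, not confirmed from this thread:

```python
import json

def looks_like_tool_call(text: str) -> bool:
    """Heuristic: treat the reply as a stray tool call if it parses as a
    JSON object containing typical tool-call keys instead of plain prose."""
    try:
        data = json.loads(text)
    except (json.JSONDecodeError, TypeError):
        return False  # plain text -> a normal answer
    if not isinstance(data, dict):
        return False
    # Key names are assumptions based on common tool-call payloads.
    suspect_keys = {"tool", "tool_calls", "name", "parameters", "arguments"}
    return bool(suspect_keys & data.keys())

def safe_answer(raw_reply: str, fallback: str = "Sorry, please try again.") -> str:
    """Return the model's reply, or a fallback if it looks like a tool call."""
    return fallback if looks_like_tool_call(raw_reply) else raw_reply
```

This doesn't fix the agent, but it stops raw Qdrant-style payloads from reaching end users while you experiment with models.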

Last node Output:

Information on your n8n setup

  • n8n version: Version 1.100.1
  • Database: PostgreSQL
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker Compose
  • Operating system: Windows 11 (using WSL)

Bonus question: why is Ollama taking so long to respond, up to a minute or more, even for simple queries? Is it a hardware limitation? I am running this on my work laptop.

Hello @nonilius_1998 , welcome!

It’s common that some models in Ollama don’t work well with tool calls, especially smaller models like the one you are using.
In this case it seems the model simply sent the tool call as its final response. You could try a bigger model, although in general llama3.2:3b should be able to call tools.

So you could try a different model here and see if you get better results.
Performance when processing a query locally depends strongly on your system, so the slow responses are very likely a hardware limitation.
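If you want to compare how well different models handle tools before wiring them into n8n, you can call Ollama’s `/api/chat` endpoint directly with a dummy tool and check whether the reply comes back in the structured `tool_calls` field or leaks the call into the message text. A rough sketch — the `search_faq` tool definition is made up purely for testing, and the live request assumes Ollama on its default port 11434:

```python
import json
import urllib.request

def build_chat_payload(model: str, question: str) -> dict:
    """Build an Ollama /api/chat request with one dummy tool attached."""
    return {
        "model": model,
        "stream": False,
        "messages": [{"role": "user", "content": question}],
        "tools": [{  # dummy tool, just to see whether the model uses the schema
            "type": "function",
            "function": {
                "name": "search_faq",
                "description": "Search the FAQ documents",
                "parameters": {
                    "type": "object",
                    "properties": {"query": {"type": "string"}},
                    "required": ["query"],
                },
            },
        }],
    }

def classify_reply(message: dict) -> str:
    """'tool_call' if the model used the structured field, 'leaked' if it
    dumped call-like JSON into the text, otherwise 'answer'."""
    if message.get("tool_calls"):
        return "tool_call"
    try:
        parsed = json.loads(message.get("content", ""))
        if isinstance(parsed, dict) and {"name", "parameters", "arguments"} & parsed.keys():
            return "leaked"
    except json.JSONDecodeError:
        pass
    return "answer"

if __name__ == "__main__":
    # Live check against a local Ollama instance (assumed default port).
    req = urllib.request.Request(
        "http://localhost:11434/api/chat",
        data=json.dumps(build_chat_payload("llama3.2:3b", "When do you open?")).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)
    print(classify_reply(reply["message"]))
```

Running this for each candidate model gives a quick read on which ones reliably return a clean `tool_calls` entry and which ones "leak" the call as plain text, which is the symptom described above.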


Yeah, I think the smaller model was the probable cause. I tried the 8B model; although it took significantly longer to respond, the response is now what the LLM usually outputs. Thank you for answering @bens