I’m building a RAG agent with Mistral as the LLM and Qdrant as the vector store, on a Linux server. My problem is that when I ask the agent something simple, it takes a long time to respond, and since I have fewer than 1,000 points in Qdrant, that seems strange to me. I’ve also noticed that when I run the agent, the chat model node never turns green and doesn’t appear in the execution log that comes with the chat. That’s the only cause I can think of. What I don’t understand is why I can query the Mistral API directly without problems, yet the workflow is slow and the chat model node shows no activity when it runs. I’m attaching evidence and my workflow.
This is the node:
Here are the n8n logs:
But it does answer with the information I need:
It even answers me here:
This is what I get in the Docker logs:
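To try to isolate whether the latency comes from the Mistral API or from the workflow itself, I timed the calls outside n8n with a small script. This is only a sketch: the endpoint URLs, the model name, and the Qdrant port are assumptions based on the public Mistral API and Qdrant’s defaults, not taken from my workflow.

```python
import time

def time_call(label, fn, *args, **kwargs):
    """Time a single call and print its latency in milliseconds."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"{label}: {elapsed_ms:.0f} ms")
    return result, elapsed_ms

# Usage (assumes `requests` is installed and MISTRAL_API_KEY is exported;
# URLs/ports are the public Mistral endpoint and Qdrant's default 6333):
#
# import os, requests
# time_call("mistral", requests.post,
#           "https://api.mistral.ai/v1/chat/completions",
#           headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
#           json={"model": "mistral-small-latest",
#                 "messages": [{"role": "user", "content": "ping"}]})
# time_call("qdrant", requests.get, "http://localhost:6333/collections")
```

If both calls come back fast, the delay would have to be inside the workflow rather than in the services themselves.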
- n8n version: 1.80.5
- Database (default: SQLite): PostgreSQL
- Running n8n via (Docker, npm, n8n cloud, desktop app): Docker
- Operating system: Linux (server)