Problem with the n8n Mistral Chat Model node

I’m building a RAG agent on a Linux server, using Mistral as the LLM and Qdrant as the vector store. My problem is that the agent takes a long time to respond even to simple questions, which seems strange given that I have fewer than 1,000 points in Qdrant. I’ve also noticed that when I run the agent, the Chat Model node never turns green and doesn’t appear in the list of logs shown with the chat. That’s the only place I can see the problem coming from. What I don’t understand is why I can make queries to the Mistral API successfully, yet the node is slow and nothing shows up when the workflow runs. I’m attaching evidence and my workflow.


This is the node:


Here are the n8n logs:


But it does answer with the information I need:


It even answers me here:


This is what I get in the Docker logs:

  • n8n version: 1.80.5
  • Database (default: SQLite): PostgreSQL
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker
  • Operating system: Linux server

If you replace those models with OpenAI models, do you see similar behavior, or does it work?

It’s the same problem :c

In that case, something might be wrong with your credentials.

One troubleshooting step would be to try a custom HTTP Request node to call the Mistral API directly, without going through the Chat Model node.

But since that is a bit more complicated, please make sure you’ve configured your credentials correctly. Go over every step again and try making simple calls to the API.
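For example, here is a minimal sketch of calling the Mistral chat completions endpoint directly and timing the response. It assumes the standard `https://api.mistral.ai/v1/chat/completions` endpoint, the `mistral-small-latest` model, and an API key in the `MISTRAL_API_KEY` environment variable; substitute whatever model and key your agent actually uses. You could run something like this from a Code node or a small Node.js script to check the credential and the latency outside the agent:

```typescript
// Minimal sketch: call the Mistral chat completions API directly to verify
// the credential and measure latency outside the Agent / Chat Model node.
// Assumes the standard endpoint and mistral-small-latest; adjust as needed.
const apiKey = process.env.MISTRAL_API_KEY; // same key used in the n8n credential

const started = Date.now();
const response = await fetch("https://api.mistral.ai/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${apiKey}`,
  },
  body: JSON.stringify({
    model: "mistral-small-latest",
    messages: [{ role: "user", content: "Reply with a single word: ping" }],
  }),
});

if (!response.ok) {
  // A 401 points at the credential; a 429 or a timeout points at rate limits or the network.
  throw new Error(`Mistral API error ${response.status}: ${await response.text()}`);
}

const data = await response.json();
console.log(`Took ${Date.now() - started} ms`);
console.log(data.choices[0].message.content);
```

If that call comes back quickly, the credential and the API are probably fine and the delay is somewhere inside the workflow (retrieval, embeddings, or the agent loop); if it is slow or fails with 401/429, the problem is on the credential or Mistral side.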
