AI Agent with Local LLM

Hello, I tried connecting a local LLM to n8n this week, but unfortunately it didn't work as expected. I attempted to integrate it both via Ollama and as an OpenAI-compatible endpoint, yet I consistently ran into issues with function calling: either the model couldn't call any function at all, or it only ever called one function and never a second one, even when I explicitly requested it in the prompt. I spent the most time trying to get the new Mistral Small 3.1 running. With vLLM I couldn't get tool calling to work at all, and with a Q4-quantized Ollama version it only called one function. I would appreciate it if someone with experience in this area could get in touch.
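One way to narrow this down is to test tool calling directly against the OpenAI-compatible endpoint, bypassing n8n entirely. Here is a minimal sketch using the openai Python SDK; the base URL, model name, and the `get_weather` tool are placeholders for illustration, not my actual setup:

```python
# Minimal sketch: test tool calling against an OpenAI-compatible
# endpoint (vLLM or Ollama) directly, without n8n in between.
# Base URL, API key, and model name below are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # hypothetical local endpoint
    api_key="not-needed-for-local",       # local servers usually ignore this
)

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical example tool
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="mistral-small-3.1",  # placeholder model name
    messages=[{"role": "user", "content": "What is the weather in Berlin?"}],
    tools=tools,
)

# If the server and model support tool calling, this should be populated.
# If it prints None, the problem sits below n8n, in the model or server config.
print(response.choices[0].message.tool_calls)
```

If this already fails, the issue is with the model or the serving setup rather than with the n8n AI Agent node.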

Information on your n8n setup

  • n8n version: 1.83.2
  • Database: SQLite (default)
  • Running n8n via: Docker

Hey @marc109

Let me know if this video answers your question:

Thank you for the answer. I skimmed through the video, but unfortunately I couldn't identify anything related to my problem. n8n is set up, and I run the LLM through RunPod. The LLM works fine for regular calls, but it causes problems when I use tool calling. Even when I deploy a model via Ollama that supports tool calling, it only calls one tool and then generates the final answer.
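To reproduce the "only one tool call" behavior outside n8n, a manual agent loop like the sketch below can show whether the model requests a second tool call once the first tool's result is fed back. The endpoint, model name, and `echo_tool` are assumptions for illustration:

```python
# Minimal sketch of a manual agent loop: feed each tool result back and
# check whether the model chains a second tool call or answers immediately.
# Endpoint, model name, and echo_tool are hypothetical placeholders.
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="local")

tools = [{
    "type": "function",
    "function": {
        "name": "echo_tool",  # hypothetical tool that echoes its input
        "description": "Echoes the given text back.",
        "parameters": {
            "type": "object",
            "properties": {"text": {"type": "string"}},
            "required": ["text"],
        },
    },
}]

messages = [{
    "role": "user",
    "content": "Call echo_tool twice: first with 'one', then with 'two'.",
}]

for step in range(4):  # allow a few rounds of tool calls
    resp = client.chat.completions.create(
        model="mistral-small-3.1",  # placeholder model name
        messages=messages,
        tools=tools,
    )
    msg = resp.choices[0].message
    if not msg.tool_calls:
        print("Final answer after", step, "tool round(s):", msg.content)
        break
    messages.append(msg)  # keep the assistant's tool-call turn in history
    for call in msg.tool_calls:
        args = json.loads(call.function.arguments)
        # Feed the "tool result" back so the model can decide on a next call.
        messages.append({
            "role": "tool",
            "tool_call_id": call.id,
            "content": args.get("text", ""),
        })
```

If the model answers after a single round here as well, the behavior comes from the model or the server's chat template, not from n8n.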

If necessary, I could provide a more detailed description tomorrow.
