I’ve been running the DeepSeek-R1 distilled Qwen model on a server with vLLM, and a self-hosted n8n instance that uses it.
In normal cases it works well, but as soon as I add tools it stops working.
I’ve enabled vLLM’s tool-calling configuration, and when I send a request directly with curl (or plain HTTP) it responds with a tool choice, but in an n8n AI Agent it doesn’t work at all.
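For reference, this is roughly what the direct request looks like; it's a sketch, and the model name, port, and tool schema here are assumptions, so adjust them to your deployment:

```shell
# Minimal tool-calling request against vLLM's OpenAI-compatible endpoint.
# Model name and port are assumptions; change them to match your server.
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
    "messages": [{"role": "user", "content": "What is 21 * 2?"}],
    "tools": [{
      "type": "function",
      "function": {
        "name": "Calculator",
        "description": "Evaluate a math expression",
        "parameters": {
          "type": "object",
          "properties": {"expression": {"type": "string"}},
          "required": ["expression"]
        }
      }
    }],
    "tool_choice": "auto"
  }'
```

Sent like this, the response contains a `tool_calls` entry as expected; only the AI Agent path fails.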
Your tool is connected to the Memory connector instead of the Tools connector.
Your tool name is Calculator1, which you probably want to change to Calculator, since that is the name you told the LLM the tool has.
I’ve been running DeepSeek-R1 Distilled Qwen via vLLM and n8n as well; it works fine without tools, but the AI Agent tool integration fails, which is likely due to inference-server or prompt misconfiguration. Consider running DeepSeek locally via LM Studio for easier setup, full offline control, and tool-based workflows.
I’ve installed a smaller version of the same distilled model and hit the same trouble.
Finally, I installed Qwen3 directly (Qwen3-30B-A3B-Thinking) and it works well.
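In case it helps anyone else, this is a sketch of the launch command that worked for me; the exact model repo name and the `hermes` parser choice are assumptions here, so verify them against vLLM's tool-calling documentation for your model:

```shell
# Sketch: serve Qwen3 with automatic tool-choice enabled.
# Qwen models generally use the Hermes-style tool-call parser in vLLM;
# the model identifier below is an assumption, adjust to your checkpoint.
vllm serve Qwen/Qwen3-30B-A3B-Thinking-2507 \
  --enable-auto-tool-choice \
  --tool-call-parser hermes
```

With those flags the AI Agent's tool calls are parsed into structured `tool_calls` instead of being left as raw text in the completion.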