Locally hosted LLM is not able to call tools

Hi @Jon,

thank you for getting back to me so quickly.

If you take a look at the model card from NVIDIA, it specifically states that the model is trained for tool calling.

I also tested whether it was capable of tool calling by changing the Agent Type to “OpenAI Functions Agent”. With that Agent Type I get a “400 status code (no body)” error response from n8n.

This is the console log from using the model with the “OpenAI Functions Agent”:

2025-06-27T11:08:45.467Z | error | 400 status code (no body) {"file":"error-reporter.js","function":"defaultReport"}
2025-06-27T11:08:45.467Z | debug | Running node "AI Agent" finished with error {"node":"AI Agent","workflowId":"jOqu92akylxZQm06","file":"logger-proxy.js","function":"exports.debug"}
2025-06-27T11:08:45.467Z | debug | Executing hook on node "AI Agent" (hookFunctionsPush) {"executionId":"6809","pushRef":"wvad9rmsml","workflowId":"jOqu92akylxZQm06","file":"execution-lifecycle-hooks.js"}
2025-06-27T11:08:45.468Z | debug | Pushed to frontend: nodeExecuteAfter {"dataType":"nodeExecuteAfter","pushRefs":"wvad9rmsml","file":"abstract.push.js","function":"sendTo"}
2025-06-27T11:08:45.468Z | debug | Workflow execution finished with error {"error":{"level":"warning","tags":{},"context":{},"functionality":"configuration-node","name":"NodeApiError","timestamp":1751022525464,"node":{"parameters":{"notice":"","model":{"__rl":true,"value":"nvidia/llama-3.3-nemotron-super-49b-v1","mode":"list","cachedResultName":"nvidia/llama-3.3-nemotron-super-49b-v1"},"options":{}},"type":"@n8n/n8n-nodes-langchain.lmChatOpenAi","typeVersion":1.2,"position":[-840,-320],"id":"cec34fcd-ddfd-4bcb-b4bd-b97031e8ee17","name":"Local","notesInFlow":true,"credentials":{"openAiApi":{"id":"dX1EaCNOPnvtPwDG","name":"Local Reasoning Model"}}},"messages":["400 status code (no body)"],"httpCode":"400","description":"400 status code (no body)","message":"Bad request - please check your parameters","stack":"NodeApiError: Bad request - please check your parameters\n    at Object.onFailedAttempt (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/@n8n+n8n-nodes-langchain@file+packages+@n8n+nodes-langchain_9ca6f82764a6c40719e9f8a538948cbd/node_modules/@n8n/n8n-nodes-langchain/nodes/llms/n8nLlmFailedAttemptHandler.ts:26:21)\n    at RetryOperation._fn (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/[email protected]/node_modules/p-retry/index.js:67:20)\n    at processTicksAndRejections (node:internal/process/task_queues:105:5)"},"workflowId":"jOqu92akylxZQm06","file":"logger-proxy.js","function":"exports.debug"}
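To rule out n8n as the source of the 400, one thing I plan to try is sending a raw OpenAI-style request with a tools definition directly to the local endpoint. This is just a sketch of the payload I would POST to the server's /v1/chat/completions route; the tool name and question are made-up examples, only the model name matches my setup:

```python
import json

def build_tool_call_request(model: str, question: str) -> dict:
    """Build a minimal OpenAI-compatible chat completion payload
    that includes one tool definition in the "tools" field."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": question}],
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "wikipedia_search",  # example tool, not a real one from my workflow
                    "description": "Search Wikipedia for a topic.",
                    "parameters": {
                        "type": "object",
                        "properties": {"query": {"type": "string"}},
                        "required": ["query"],
                    },
                },
            }
        ],
    }

payload = build_tool_call_request(
    "nvidia/llama-3.3-nemotron-super-49b-v1",
    "Who wrote Faust?",
)
print(json.dumps(payload, indent=2))
```

If POSTing this payload outside of n8n (e.g. with curl) also returns a 400, the problem would be on the inference server side rather than in the n8n node.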

!!! IMPORTANT
The part that makes me curious: when I use the Plan &amp; Execute Agent with the same model and attach a Wikipedia tool, it is able to use the tool and return an answer to the user.

The tests above tell me that the model is in principle capable of calling tools, but there must be a problem somewhere else that I am not able to see.

Questions that came up from this:

  • Is there a difference in how tools are called between the Plan &amp; Execute Agent, the OpenAI Functions Agent, and the Tools Agent?
  • Which Agent would be the correct one for this case (the Llama 3.3 model is served through an OpenAI-compatible API)?
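My current understanding (please correct me if this is wrong): the Plan &amp; Execute Agent describes the tools as plain text inside the prompt and parses the model's free-text reply, so any endpoint that handles ordinary chat completions works, while the OpenAI Functions / Tools Agents send the native "tools" field and expect a structured "tool_calls" response. A rough sketch of the two request shapes (field names follow the OpenAI chat completions format; the prompt wording is invented):

```python
# Prompt-based tool use (Plan & Execute style): the tool only exists
# as text in the prompt, no special API fields are needed.
prompt_based = {
    "model": "nvidia/llama-3.3-nemotron-super-49b-v1",
    "messages": [{
        "role": "user",
        "content": (
            "You can use the tool wikipedia(query). "  # tool described in text
            "Answer: Who wrote Faust?"
        ),
    }],
}

# Native function calling (OpenAI Functions / Tools Agent style): the
# server must accept the "tools" field and emit "tool_calls" -- if it
# does not, a 400 like the one in the log above seems plausible.
native = {
    "model": "nvidia/llama-3.3-nemotron-super-49b-v1",
    "messages": [{"role": "user", "content": "Who wrote Faust?"}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "wikipedia",
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
            },
        },
    }],
}

print("tools" in prompt_based, "tools" in native)  # → False True
```

That would explain why the Plan &amp; Execute Agent works while the OpenAI Functions Agent fails, if my local server rejects the "tools" field.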

If I can help with more information or debugging logs, please let me know; I am happy to assist.