I’m facing an issue with the AI Agent node in n8n. It works perfectly with Groq, but it doesn’t work with the OpenRouter, OpenAI, or Google providers. However, the Basic LLM Chain node works fine with all of these providers (OpenRouter, OpenAI, Google, and Groq).

I’m running n8n in a Docker container on an Ubuntu server, under my own domain. I suspect the problem is related to function calling support in the models, but I’m not sure how to fix it.

Has anyone encountered this issue, or does anyone know how to troubleshoot it? I’d appreciate any advice on model selection, configuration, or debugging steps. Please let me know if you need more details about my setup or versions.
The models you’re using have varying degrees of function calling support, which is critical for the AI Agent node:
OpenRouter model issues:
1. qwen2.5-vl-72b: This vision-language model has inconsistent function calling support
2. dolphin3.0-mistral-24b: This model requires explicit function calling format
3. llama-3.3-70b-instruct: Should work, but may need specific configuration

Try these alternatives while debugging (a quick way to probe any of them is sketched after this list):
1. anthropic/claude-3-opus:function-calling
2. anthropic/claude-3-sonnet:function-calling
3. openai/gpt-4o:free
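Since the AI Agent node depends on the provider returning structured tool calls, a quick way to test any candidate model is to hit the OpenAI-compatible chat completions endpoint directly and check for a `tool_calls` field in the reply. Here’s a minimal sketch in TypeScript — the `probeToolCalling` helper and the `get_weather` tool are illustrations of mine, not anything n8n ships, and it assumes Node 18+ for the global `fetch`:

```typescript
// Minimal probe: does this model return structured tool_calls?
// Works against any OpenAI-compatible /chat/completions endpoint.
async function probeToolCalling(
  baseUrl: string,
  apiKey: string,
  model: string,
): Promise<boolean> {
  const res = await fetch(`${baseUrl}/chat/completions`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model,
      messages: [{ role: "user", content: "What is the weather in Berlin?" }],
      // Dummy tool in OpenAI's function-calling schema — the same
      // shape the AI Agent node sends on behalf of its tools.
      tools: [{
        type: "function",
        function: {
          name: "get_weather",
          description: "Get the current weather for a city",
          parameters: {
            type: "object",
            properties: { city: { type: "string" } },
            required: ["city"],
          },
        },
      }],
    }),
  });
  const data = await res.json();
  // A model with working function calling answers with tool_calls
  // instead of plain text describing the weather.
  const toolCalls = data.choices?.[0]?.message?.tool_calls;
  return Array.isArray(toolCalls) && toolCalls.length > 0;
}
```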
Thank you, I switched to GPT-4o-mini and the AI Agent node started working. I’ll look for some alternative models on OpenRouter. But it’s strange that meta-llama/llama-3.3-70b-instruct:free doesn’t work on OpenRouter while it works on Groq.
The discrepancy between Groq and OpenRouter for the same model (llama-3.3-70b-instruct) is likely due to how each provider implements function calling:
Implementation differences:
Groq may have added custom function calling wrappers around Llama 3.3
OpenRouter routes requests to upstream providers, some of which may serve the model more directly, without these enhancements (the sketch below lets you compare the two empirically)
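To check this empirically, you can run the same probe (the hypothetical `probeToolCalling` helper sketched above) against both providers with their respective model IDs — the IDs below are assumptions, so verify them against each provider’s model list:

```typescript
// Same base model, two providers — run in an ES module (top-level await).
// The model IDs here are assumptions; check each provider's model list.
const groqOk = await probeToolCalling(
  "https://api.groq.com/openai/v1",
  process.env.GROQ_API_KEY!,
  "llama-3.3-70b-versatile", // Groq's Llama 3.3 70B ID
);
const openRouterOk = await probeToolCalling(
  "https://openrouter.ai/api/v1",
  process.env.OPENROUTER_API_KEY!,
  "meta-llama/llama-3.3-70b-instruct:free",
);
console.log({ groqOk, openRouterOk });
```

If Groq returns true and OpenRouter false for the same base model, the gap is in the serving layer, not the model weights.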
Working models for AI Agent node:
Stick with GPT-4o-mini since it’s working for you
For OpenRouter alternatives, try:
anthropic/claude-3-haiku:function-calling (cheaper than opus/sonnet)
mistralai/mistral-large:function-calling
The n8n AI Agent node requires robust function calling support that matches OpenAI’s implementation standard, and not every provider fully delivers this even when serving the same base model.
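Concretely, “meeting OpenAI’s implementation standard” means that when the agent sends a `tools` array, the provider must answer with a structured `tool_calls` field the agent can parse. A rough sketch of the difference, with invented values, in the OpenAI chat completions shape:

```typescript
// What the agent needs back: a structured tool_calls field in the
// OpenAI chat completions shape, not prose describing the call.
const compliantReply = {
  role: "assistant",
  content: null,
  tool_calls: [{
    id: "call_abc123", // invented ID, for illustration only
    type: "function",
    function: { name: "get_weather", arguments: '{"city":"Berlin"}' },
  }],
};
// A weaker serving of the same base model may instead return
// content: "I would call get_weather(city=Berlin)" with no
// tool_calls at all — text the agent cannot turn into a tool run.
```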
If my answer helped solve your issue, please consider marking it as the solution! A like would make my day if you found it useful!