AI Agent Node Not Working with OpenRouter, OpenAI, and Google Providers in n8n

Hi everyone,

I’m facing an issue with the AI Agent node in n8n. It works perfectly with Groq, but it doesn’t work with OpenRouter, OpenAI, or Google providers. However, the Basic LLM Chain node works fine with all these providers (OpenRouter, OpenAI, Google, and Groq). I’m running n8n on an Ubuntu server in a Docker container under my own domain. I suspect it might be related to function calling support in the models, but I’m not sure how to fix it. Has anyone encountered this issue or knows how to troubleshoot it? I’d appreciate any advice on model selection, configuration, or debugging steps. Please let me know if you need more details about my setup or versions.

Thanks!

[Video demonstrating the problem failed to embed.]

  • n8n version: 1.80.3
  • Database (default: SQLite): Postgres (local)
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker
  • Operating system: Ubuntu (latest)


Hi @Kasstiel, could you tell us:

  1. Which specific model versions are you using with each provider?
  2. What error messages appear when you try to use non-Groq providers?
  1. OpenRouter:
    • qwen/qwen2.5-vl-72b-instruct:free
    • cognitivecomputations/dolphin3.0-mistral-24b:free
    • meta-llama/llama-3.3-70b-instruct:free
    Google: models/gemini-1.5-flash-latest

(I use the default OpenRouter base URL.)

The models you’re using have varying degrees of function calling support, which is critical for the AI Agent node:

  1. OpenRouter model issues:
    1. qwen2.5-vl-72b: this vision-language model has inconsistent function calling support
    2. dolphin3.0-mistral-24b: this model requires an explicit function calling format
    3. llama-3.3-70b-instruct: should work, but may need specific configuration
  2. Alternatives to try while debugging (a quick way to test each is sketched after this list):
    1. anthropic/claude-3-opus:function-calling
    2. anthropic/claude-3-sonnet:function-calling
    3. openai/gpt-4o:free
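One quick way to check whether a given OpenRouter model actually returns structured tool calls is to hit the OpenAI-compatible /chat/completions endpoint directly with a `tools` definition. This is a minimal sketch, not an n8n-specific fix: the `get_weather` tool is a made-up placeholder, and you would substitute your own API key and the model ID under test.

```python
import os
import requests

# Minimal probe: does this model return a structured tool call?
# Uses OpenRouter's OpenAI-compatible chat completions endpoint.
MODEL = "meta-llama/llama-3.3-70b-instruct:free"  # swap in the model under test

response = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": MODEL,
        "messages": [{"role": "user", "content": "What is the weather in Kyiv?"}],
        # A dummy tool definition; the AI Agent node sends something similar.
        "tools": [{
            "type": "function",
            "function": {
                "name": "get_weather",  # placeholder tool, not a real API
                "description": "Get the current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }],
    },
    timeout=60,
)
response.raise_for_status()
message = response.json()["choices"][0]["message"]

if message.get("tool_calls"):
    print("Structured tool call returned:", message["tool_calls"])
else:
    # Some models ignore the tools parameter and answer in plain text;
    # others may cause the provider to return an error instead.
    print("No tool call; plain text only:", message.get("content"))
```

If a model fails this probe outside n8n, the AI Agent node has no chance of using it for tools either.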

Thank you, I switched to GPT-4o-mini and the tool started working. I’ll look for some alternative models on OpenRouter. But it’s strange that meta-llama/llama-3.3-70b-instruct:free doesn’t work on OpenRouter while it works on Groq.

Have you checked the OpenRouter configuration settings for function calling format differences?

The discrepancy between Groq and OpenRouter for the same model (llama-3.3-70b-instruct) is likely due to how each provider implements function calling:

  1. Implementation differences:
  • Groq may have added custom function calling wrappers around Llama 3.3
  • OpenRouter might be using a more direct implementation without these enhancements
  2. Working models for the AI Agent node:
  • Stick with GPT-4o-mini since it’s working for you
  • For OpenRouter alternatives, try:
    • anthropic/claude-3-haiku:function-calling (cheaper than opus/sonnet)
    • mistralai/mistral-large:function-calling

The n8n AI Agent node requires robust function calling support matching OpenAI’s tool-calling format, which not all providers fully implement even for the same base model; the sketch below shows what that difference looks like in practice.
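To make the distinction concrete: an OpenAI-compatible provider signals a tool call via a `tool_calls` array on the assistant message, while a model without native support tends to emit the JSON inline as plain text, which the AI Agent node cannot route to a tool. A hedged sketch of the check follows; the example payloads are illustrative, not captured from a real response.

```python
import json

def describe_agent_compatibility(message: dict) -> str:
    """Classify an assistant message by how it requested a tool.

    Illustrative helper, not part of n8n; the shapes follow the
    OpenAI chat completions format that the AI Agent node expects.
    """
    if message.get("tool_calls"):
        call = message["tool_calls"][0]["function"]
        return f"structured tool call: {call['name']}({call['arguments']})"
    return "plain text only; the AI Agent node cannot dispatch a tool from this"

# What a compliant provider (e.g. Groq's Llama 3.3 wrapper) returns:
structured = {
    "role": "assistant",
    "content": None,
    "tool_calls": [{
        "id": "call_1",
        "type": "function",
        "function": {"name": "get_weather", "arguments": json.dumps({"city": "Kyiv"})},
    }],
}

# What a model without native tool support often returns instead:
plain = {"role": "assistant", "content": '{"name": "get_weather", "city": "Kyiv"}'}

print(describe_agent_compatibility(structured))
print(describe_agent_compatibility(plain))
```

This is why the same base model can behave differently across providers: the weights are identical, but only one provider wraps the output in the structured format the node parses.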

If my answer helped solve your issue, please consider marking it as the solution! A like would make my day if you found it useful! 🤖✨

