I’m wondering what the difference is between an OpenAI LLM and Groq or OpenRouter models. With the same nodes, the same prompt, and the same flow, everything works fine when using a ChatGPT LLM, but the tasks fail when I run it with other LLMs (Llama, Gemini, Mixtral, …).
Working flow:
Using other LLM:
Error:
Output: 1 item
Model output doesn’t fit required format
To continue the execution when this happens, change the ‘On Error’ parameter in the root node’s settings
Other info
n8n version - 1.77.0 (Self Hosted)
Time - 1-2-2025, 19:03:39
Error cause
{ "level": "error", "tags": {} }
Structured Output Parser1 ([Docs](https://docs.n8n.io/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.outputparserstructured/?utm_source=n8n_app&utm_medium=node_settings_modal-credential_link&utm_campaign=%40n8n%2Fn8n-nodes-langchain.outputParserStructured))
Schema Type: JSON Example
[
  {
    "id": "{{ $json.id }}",
    "Title": "your new title",
    "Article": "rewritten article",
    "Prompt": "prompt for text-to-image generator",
    "Hashtags": "#Hashtag1 #Hashtag2 #Hashtag3 ...",
    "Date Created": "{{ $json.Date }}"
  }
]
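For context on the error: the Structured Output Parser expects the raw model reply to be valid JSON matching the example above, and less instruction-tuned models often wrap their JSON in prose or markdown fences, which breaks parsing. A minimal sketch of that kind of check, written outside n8n in plain Python (the helper name and sample replies are hypothetical, not n8n's actual implementation):

```python
import json

# Keys required by the JSON Example schema above.
REQUIRED_KEYS = {"id", "Title", "Article", "Prompt", "Hashtags", "Date Created"}

def parse_model_output(raw: str) -> list:
    """Parse a model reply, raising ValueError when it doesn't fit the format."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"Model output is not valid JSON: {exc}") from exc
    if not isinstance(data, list):
        raise ValueError("Model output must be a JSON array")
    for item in data:
        if not isinstance(item, dict):
            raise ValueError("Each array item must be a JSON object")
        missing = REQUIRED_KEYS - item.keys()
        if missing:
            raise ValueError(f"Item is missing keys: {sorted(missing)}")
    return data

# A well-behaved reply: bare JSON, parses cleanly.
ok = parse_model_output(
    '[{"id": "1", "Title": "t", "Article": "a", '
    '"Prompt": "p", "Hashtags": "#x", "Date Created": "2025-02-01"}]'
)

# A typical failure mode from other models: JSON wrapped in prose and fences.
bad = 'Here is your JSON:\n```json\n[{"id": "1"}]\n```'
try:
    parse_model_output(bad)
except ValueError as e:
    print(e)
```

If this is the failure mode, a stricter system prompt ("reply with the JSON array only, no explanation, no code fences") or enabling the auto-fixing option on the output parser sometimes helps weaker models comply.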
System
- Production n8nVersion: 1.77.0
- Test n8nVersion: 1.77.0
- Production platform: npm (shared hosting provider)
- Test platform: docker (self-hosted)
- Production nodeJsVersion: 20.17.0
- Test nodeJsVersion: 20.18.2
- database: sqlite
- executionMode: regular
- concurrency: -1
- license: enterprise (production)
Storage
- success: all
- error: all
- progress: false
- manual: true
- binaryMode: memory