Why does a flow work with OpenAI but not with Groq or OpenRouter?

I’m wondering what the difference could be between an OpenAI LLM and Groq or OpenRouter models. With the same nodes, the same prompt and the same flow, everything works fine when using a ChatGPT LLM, but the tasks fail when I deploy the flow with other LLMs (Llama, Gemini, Mixtral, …).

Working flow:

Using other LLM:

Error:

Output: 1 item
Model output doesn’t fit required format

To continue the execution when this happens, change the ‘On Error’ parameter in the root node’s settings

Other info
n8n version - 1.77.0 (Self Hosted)
Time - 1-2-2025, 19:03:39

Error cause

{ "level": "error", "tags": {} }

Structured Output Parser1

Schema Type: JSON Example

[
    {
        "id": "{{ $json.id }}",
        "Title": "your new title",
        "Article": "rewritten article",
        "Prompt": "prompt for text-to-image generator",
        "Hashtags": "#Hashtag1 #Hashtag2 #Hashtag3 ...",
        "Date Created": "{{ $json.Date }}"
    }
]

System

  • Production n8nVersion: 1.77.0
  • Test n8nVersion: 1.77.0
  • Production platform: npm (shared hosting provider)
  • Test platform: docker (self-hosted)
  • Production nodeJsVersion: 20.17.0
  • Test nodeJsVersion: 20.18.2
  • database: sqlite
  • executionMode: regular
  • concurrency: -1
  • license: enterprise (production)

Storage

  • success: all
  • error: all
  • progress: false
  • manual: true
  • binaryMode: memory

It looks like your topic is missing some important information. Could you provide the following, if applicable?

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

All information was already present in the summary at the bottom of the post, marked “System”. 🙂

Hello. Not all LLMs can reliably generate structured JSON output. The other factor is that smaller models may support structured output in principle but still struggle with a large prompt like yours.

If you want to get it working with a specific model, first check that structured output works with a minimal prompt (e.g., “Output only a JSON object with a ‘Title’ field”).
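
Outside n8n, that quick check could look something like this. This is a minimal sketch against an OpenAI-compatible chat completions endpoint; the Groq base URL and model name here are assumptions, and OpenRouter works the same way with its own base URL and model:

```typescript
// Minimal structured-output smoke test against an OpenAI-compatible API.
type Message = { role: "system" | "user" | "assistant"; content: string };

async function chat(
  baseURL: string,
  apiKey: string,
  model: string,
  messages: Message[],
  jsonMode = false,
): Promise<string> {
  const res = await fetch(`${baseURL}/chat/completions`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model,
      messages,
      // JSON mode is only honoured by providers/models that support it.
      ...(jsonMode ? { response_format: { type: "json_object" } } : {}),
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

// The minimal test: if JSON.parse throws, the model isn't emitting valid JSON.
const reply = await chat(
  "https://api.groq.com/openai/v1", // assumed Groq base URL
  process.env.GROQ_API_KEY ?? "",
  "llama-3.1-8b-instant", // assumed model name; use whatever you're testing
  [{ role: "user", content: "Output only a JSON object with a 'Title' field." }],
  true,
);
console.log(JSON.parse(reply));
```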

Then you may need to split your workflow into smaller steps rather than doing everything in one go, or work on optimising the prompt for that specific model; a sketch of the splitting approach is below.
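
For example, instead of one call that has to produce the whole JSON object, you could make one call per field and assemble the object yourself (reusing the chat() helper from the sketch above; every name here is a placeholder):

```typescript
// BASE_URL, API_KEY, MODEL and `source` (the input article) are placeholders.
declare const BASE_URL: string, API_KEY: string, MODEL: string, source: string;

const article = await chat(BASE_URL, API_KEY, MODEL, [
  { role: "user", content: `Rewrite the following article:\n\n${source}` },
]);
const title = await chat(BASE_URL, API_KEY, MODEL, [
  { role: "user", content: `Write one short title for this article:\n\n${article}` },
]);
const hashtags = await chat(BASE_URL, API_KEY, MODEL, [
  { role: "user", content: `Give 3-5 hashtags for this article, space-separated:\n\n${article}` },
]);

// Assemble the structured record in code - no JSON mode needed at all.
const record = { Title: title, Article: article, Hashtags: hashtags };
```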

Another option I’ve come across recently is DeepSeek, which does not support structured output. You can generate all the information in free-form text with DeepSeek and then pass it to another LLM that can structure the output easily, without requiring extensive reasoning.
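
Roughly, that hand-off could look like this (again reusing chat() from above; the DeepSeek endpoint, both model names and the prompts are assumptions):

```typescript
// Step 1: free-form draft with DeepSeek (no structured output required).
const draft = await chat(
  "https://api.deepseek.com", // assumed DeepSeek base URL
  process.env.DEEPSEEK_API_KEY ?? "",
  "deepseek-chat", // assumed model name
  [{ role: "user", content: "Rewrite this article and suggest a title, an image prompt and hashtags: ..." }],
);

// Step 2: a JSON-capable model only has to reformat, not reason.
const structured = await chat(
  "https://api.openai.com/v1",
  process.env.OPENAI_API_KEY ?? "",
  "gpt-4o-mini", // any model with reliable JSON mode
  [
    {
      role: "system",
      content: 'Convert the user message into a JSON object with keys "Title", "Article", "Prompt" and "Hashtags". Output JSON only.',
    },
    { role: "user", content: draft },
  ],
  true,
);
console.log(JSON.parse(structured));
```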
