Local LLMs don't follow instructions

I have a problem with local models: I can't get them to follow instructions. The workflow works perfectly with paid models. Do you have any tips?

Describe the problem/error/question

What is the error message (if any)?

Please share your workflow

(Select the nodes on your canvas and use the keyboard shortcuts CMD+C/CTRL+C and CMD+V/CTRL+V to copy and paste the workflow.)

Share the output returned by the last node

While the paid models return structured output without problems, the local models don't follow the prompts or the Structured Output Parser node, causing parse errors that stop the flow.

I'm running the latest n8n locally in Docker.

Information on your n8n setup

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

Dear Beto,

You may want to slightly adjust your prompt to suit Ollama models, as their output can differ from paid cloud models due to differences in parameter size and training data. I’ve also run into similar issues, but after some experimentation with prompt phrasing, I managed to get my local models working.
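One pattern that helped me (a generic sketch, not something built into n8n): smaller local models often wrap their JSON answer in markdown fences or add chatty commentary, which breaks strict parsers. Tolerating that before parsing can rescue the flow. The `extract_json` helper below is hypothetical, just to illustrate the idea:

```python
import json
import re

def extract_json(raw: str) -> dict:
    """Pull the first JSON object out of a chatty local-model reply."""
    # Strip common ```json ... ``` fences first.
    fenced = re.search(r"```(?:json)?\s*(\{.*?\})\s*```", raw, re.DOTALL)
    candidate = fenced.group(1) if fenced else raw
    # Fall back to the outermost braces if extra prose surrounds the object.
    start, end = candidate.find("{"), candidate.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON object found in model output")
    return json.loads(candidate[start : end + 1])

# Example of the kind of reply a local model might produce instead of bare JSON:
reply = 'Sure! Here is the result:\n```json\n{"name": "Beto", "ok": true}\n```'
print(extract_json(reply))  # {'name': 'Beto', 'ok': True}
```

In n8n you could do something similar in a Code node placed between the model and the rest of the flow. Depending on your model, it may also help to state the expected schema explicitly in the prompt and add "Respond with JSON only, no explanations."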

If you’re willing to share more details, I’d be happy to help further. Could you provide:

  • The Ollama model and version you’re using

  • The prompt you’re sending to the model

  • The output structure you expect, and what you actually receive

Thanks! This info will help pinpoint what’s going wrong and how best to modify your workflow.