Local llama gives query instead of answer

Hi all, I’m new to setting up n8n and Ollama locally. When testing the local Llama model, it returns the query parameters on the first attempt and only gives the correct response the second time I ask the same question. Is there any way to fix this? Thanks in advance!

First time:

Second time:

Workflow:

It depends on the model, and of course on the system prompt with clear instructions, as well as the tool descriptions.
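
As a minimal sketch (not your exact n8n setup), here is what a stricter system prompt looks like when sent straight to Ollama's chat endpoint. The URL, port, and model name `llama3` are assumptions; adjust them to whatever you have pulled locally:

```python
import requests

# Assumption: Ollama is running locally on its default port and the model
# name matches one you have pulled (e.g. `ollama pull llama3`).
OLLAMA_URL = "http://localhost:11434/api/chat"
MODEL = "llama3"

# A system prompt that tells the model to answer directly instead of
# echoing tool/query parameters back to the user.
system_prompt = (
    "You are a helpful assistant inside an n8n workflow. "
    "Always answer the user's question directly in plain language. "
    "Never output tool parameters, JSON arguments, or the query itself "
    "unless you are explicitly calling a tool."
)

payload = {
    "model": MODEL,
    "stream": False,
    "messages": [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "What is the capital of France?"},
    ],
}

response = requests.post(OLLAMA_URL, json=payload, timeout=120)
response.raise_for_status()

# The assistant's reply is in message.content of the response body.
print(response.json()["message"]["content"])
```

Smaller local models in particular tend to need this kind of explicit instruction; if the first reply still comes back as raw parameters, trying a larger model or tightening the tool descriptions in the agent node usually helps.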
