Hi all, I’m new to setting up n8n and Ollama locally. When I test the local Llama model, the first attempt returns the query parameters instead of an answer, and I only get the correct response the second time I ask the same question. Is there any way to fix this? Thanks in advance!
First time:
Second time:
Workflow:
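
In case it helps with troubleshooting: the model can also be queried directly over Ollama's REST API, bypassing n8n entirely. This is just a sketch assuming the default port 11434, and `llama3` is a placeholder for whichever model is actually loaded. If the first call already misbehaves here, the problem is on the Ollama side rather than in the n8n workflow:

```shell
# Query Ollama directly, bypassing n8n.
# "llama3" is a placeholder -- substitute the model name shown by `ollama list`.
# "stream": false returns the whole response as a single JSON object.
curl -s http://localhost:11434/api/generate \
  -d '{
        "model": "llama3",
        "prompt": "What is the capital of France?",
        "stream": false
      }'
```

If this returns a normal answer on the first try, the query-parameter output is likely coming from how the n8n node builds the prompt rather than from the model itself.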


