Ollama Chat Model Node Produces Unexpected Results From Tools Call

I filed this as a bug, but the maintainers thought it might be configuration-related. I don’t think so, since tool calls were designed to provide some uniformity in functionality across models, and only certain models are even built with tool support. Please let me know where you think this should go.

Bug Description

I was trying to build a simplified version of Saad Naveen’s RSS Newsletter flow, but I wanted to use Ollama instead of OpenAI. When I wire the flow up the same way as the working OpenAI flow, the run always fails. The failure occurs at the structured output parser, but it appears to fail because the output from the tool call of the Ollama node is nowhere near what is expected.

This was a test, so the flow I have here converts XML from a YouTube RSS feed to JSON, passes it to a specialized agent, which reformats the JSON in preparation for an HTML newsletter, which is then mailed out via Gmail. Since I had trouble with Ollama, I replicated the step in JavaScript and then, again, with OpenAI, exactly as in the original example.

Here is the flow:

OpenAI flow result:

Ollama flow result:

It appears to me that the input to the two model nodes is identical, but the way each node handles the JSON is very different. As a result, the data sent back to the agent is lost in the structured output parser:

Here’s the error details:

To reproduce:

  1. Set up an HTTP Request node against any YouTube RSS feed.

  2. Pass the result to an XML node and convert it to JSON.

  3. Use an agent, with the following as the prompt source:
    Your job is to take this data {{ $json.feed.entry.toJsonString() }} and format it in the required output format. Also, summarize the video description at the same time, as it’ll be used in a newsletter.

  4. Require a specific output format and attach a structured output parser with the following example schema:

[
  {
    "title": "Video Title",
    "description": "Video Description",
    "link": "https://www.youtube.com/shorts/5Pwe3TYnUrw",
    "thumbnail_url": "https://i1.ytimg.com/vi/ToW_AezocP0/hqdefault.jpg"
  },
  {
    "title": "Video Title",
    "description": "Video Description",
    "link": "https://www.youtube.com/shorts/5Pwe3TYnUrw",
    "thumbnail_url": "https://i1.ytimg.com/vi/ToW_AezocP0/hqdefault.jpg"
  }
]

  5. Attach an Ollama model node to the agent, use Ollama credentials, and select gpt-oss-20b (it was the closest model I could find to the gpt-4o-mini used in the working test).

  6. Execute.
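One quick way to see where things break is to check the model’s final output against the expected schema before it ever reaches the structured output parser. Here is a minimal validator sketch; the function name and sample data are my own assumptions, not part of the flow:

```javascript
// Minimal check that an agent's output matches the expected schema:
// an array of objects whose title, description, link, and
// thumbnail_url fields are all strings.
function matchesSchema(output) {
  if (!Array.isArray(output)) return false;
  const keys = ["title", "description", "link", "thumbnail_url"];
  return output.every(
    (item) =>
      item !== null &&
      typeof item === "object" &&
      keys.every((k) => typeof item[k] === "string")
  );
}

// A well-formed response passes...
const good = [
  {
    title: "Video Title",
    description: "Video Description",
    link: "https://www.youtube.com/shorts/5Pwe3TYnUrw",
    thumbnail_url: "https://i1.ytimg.com/vi/ToW_AezocP0/hqdefault.jpg",
  },
];
console.log(matchesSchema(good)); // true

// ...while a tool-call wrapper object (roughly what the Ollama node
// appeared to hand back) fails, which is what the parser then rejects.
const bad = { tool_calls: [] };
console.log(matchesSchema(bad)); // false
```

Running this in a Code node on the model output of each branch would show directly which node produces parser-compatible data.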

Expected behavior

Expected behavior is that it formats the JSON appropriately to be processed by the following code in an HTML node:
const videos = $input.first().json.output;

let html = `

DevCentral YouTube Channel Newsletter

Latest Videos from DevCentral

`;

for (const video of videos) {
  const thumbnail = video.thumbnail_url || '';
  html += `<div style="border-bottom: 1px solid #e0e0e0; padding-bottom: 20px; margin-bottom: 20px; display: flex; align-items: flex-start; gap: 32px;">
    <div aria-label="Video thumbnail" style="flex-shrink: 0; width: 120px; height: 67px; border-radius: 8px; overflow: hidden; box-shadow: 0 3px 8px rgba(0,0,0,0.15); margin-right: 16px;">
      <a href="${video.link}" target="_blank" rel="noopener noreferrer">
        <img src="${thumbnail}" alt="Thumbnail for ${video.title}" style="width: 100%; height: 100%; object-fit: cover; display: block;" />
      </a>
    </div>
    <div style="flex-grow: 1;">
      <p style="font-size: 18px; font-weight: 600; margin: 0 0 8px 0; color: #222222;">
        <a href="${video.link}" target="_blank" rel="noopener noreferrer" style="color: #ff6d5a; text-decoration: none;" tabindex="0">${video.title}</a>
      </p>
      <p style="font-size: 14px; color: #555555; margin: 0; line-height: 1.5;">${video.description}</p>
    </div>
  </div>`;
}

html += `

`;

// Return the full HTML for use in the next node
return [{ json: { newsletterHtml: html } }];
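The loop above can be exercised outside n8n by stubbing $input with a plain array. The sample entry below is made up for illustration; only the field names match the schema:

```javascript
// Stand-alone sketch of the newsletter loop, with n8n's $input
// replaced by a plain array. The sample entry is assumed data.
const videos = [
  {
    title: "Sample Video",
    description: "A short summary for the newsletter.",
    link: "https://www.youtube.com/shorts/5Pwe3TYnUrw",
    thumbnail_url: "https://i1.ytimg.com/vi/ToW_AezocP0/hqdefault.jpg",
  },
];

let html = "";
for (const video of videos) {
  const thumbnail = video.thumbnail_url || "";
  html += `<div><a href="${video.link}"><img src="${thumbnail}" alt="Thumbnail for ${video.title}"></a><p>${video.title}</p><p>${video.description}</p></div>`;
}

console.log(html.includes("Sample Video")); // true
```

If the upstream node returns the schema-shaped array, this loop works; when the Ollama node returns anything else, the template fields come out undefined, which matches the failure described above.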


This happens with models a lot. If you enable the auto-fix toggle on your structured output parser and attach a model to it, it can often fix this. You might also want to improve your system prompt.

I tried auto-fix and that failed; I used code-focused models, too. That’s definitely not the problem, though. By the time the structured output parser gets the data, it has already been incorrectly sent back to the agent by the model. This is clearly visible in my screenshots. The problem very clearly seems to be in the way the Ollama node uses tools. Look at that node’s output versus the working OpenAI node’s: Ollama has it completely wrong.

I also tried a much longer and more focused prompt. That fails with the exact same issue: the Ollama node’s output is incorrect as it passes back to the agent, before it ever hits the structured output parser.

Can you use a different model? This might just be an issue with the model you are using; it may not know how to properly perform the task.

I have tried every model with a tools implementation that is available via Ollama. None of them work; all fail with the same error. This particular run was screenshotted with the most similar model I could find to my working OpenAI flow: gpt-oss.

I really think this is an issue in the tools implementation of the Ollama node.


I agree, it’s probably an Ollama issue if it is not working correctly.


If you can, try a different provider such as ChatGPT, Gemini, or OpenRouter, and let me know how it works.


Yep, I already did it with OpenAI. It worked the first time, exactly as advertised; that is shown in the screenshots. Watching OpenAI work is why I tried Ollama with gpt-oss, as it’s arguably a similar model to gpt-4o-mini.


Yeah, maybe you could edit the model’s parameters somehow, but at least we figured out it’s an Ollama issue, as you suspected!


I’m trying to get my GitHub issue reopened. They don’t seem to understand what I’m trying to tell them.


Just wondering, but do you have the option to add an Accept: application/json header to the request? That way the output data would already be JSON and you wouldn’t need the LLM to convert it.

And if that doesn’t work, use a convert-from-XML-to-JSON node?


I will try headers, but I’m already using an XML-to-JSON node. The JSON gets to the agent just fine, exactly as in the code path and the OpenAI path in my example. The problem is the output from the Ollama node: it does not look anything like the code or OpenAI outputs at all. I have tried every model I can find that supports tools. There aren’t a ton, but I’ve tried at least five, and all have failed in the exact same way: the output from the Ollama node is incorrect.
