Ollama Chat node: request to extend the 5-minute timeout [GOT CREATED]

The idea is:

Ollama provides a "keep_alive" parameter in its API calls, but the n8n Ollama node doesn't support it. As a user of Continue with the Ollama LLM serving backend, I frequently experience long delays in my workflow because Ollama unloads the model and its weights after 5 minutes by default. Ollama recently added support for the keep_alive parameter in requests, which can prevent unloading or make the model's in-memory persistence configurable. Please add support for configuring the keep_alive parameter and including it in inference requests sent to the Ollama backend. The parameter was added in Ollama 0.1.23 through the merged pull request here: add keep_alive to generate/chat/embedding api endpoints by pdevine · Pull Request #2146 · ollama/ollama · GitHub
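For reference, this is what the parameter looks like at the API level. Below is a minimal sketch of a direct call to Ollama's /api/chat endpoint with keep_alive set; the model name and duration are illustrative. keep_alive accepts a duration string such as "30m", a number of seconds, or -1 to keep the model loaded indefinitely:

```typescript
// Sketch: direct request to Ollama's chat endpoint with keep_alive.
// Assumes Ollama is running locally on its default port 11434
// and that the model named below has been pulled already.
async function chatWithKeepAlive(): Promise<void> {
  const response = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3", // illustrative model name
      messages: [{ role: "user", content: "Hello!" }],
      stream: false,
      // Keep the model in memory for 30 minutes after this request
      // instead of the 5-minute default; -1 keeps it loaded indefinitely.
      keep_alive: "30m",
    }),
  });
  const data = await response.json();
  console.log(data.message?.content);
}

chatWithKeepAlive().catch(console.error);
```

As a server-side alternative, Ollama also supports an OLLAMA_KEEP_ALIVE environment variable that changes the default for all requests, but per-request keep_alive is what the n8n node would need to expose.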

My use case:

I use Ollama to run local AI models but ran into the long timeout issue.

I think it would be beneficial to add this because:

For privacy reasons, running a local LLM model is expected.

Any resources to support this?

add keep_alive to generate/chat/embedding api endpoints by pdevine · Pull Request #2146 · ollama/ollama · GitHub

Are you willing to work on this?

Yes, I'm able to help with testing anytime.

Hey @WTeeth,

Don’t forget to vote to make it count.

A new n8n version has been released which includes GitHub PR 9215.
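For anyone updating: the n8n Ollama nodes are backed by LangChain's Ollama integration, so the fix amounts to passing a keep-alive option through to it. Here is a minimal sketch of what that looks like at the library level; the keepAlive field name and values are assumptions based on the @langchain/ollama package, not the exact n8n node UI:

```typescript
// Sketch: configuring keep-alive via LangChain's Ollama chat model,
// which is what the n8n Ollama nodes wrap under the hood.
// Field names here are based on @langchain/ollama, not the n8n UI.
import { ChatOllama } from "@langchain/ollama";

async function main(): Promise<void> {
  const model = new ChatOllama({
    baseUrl: "http://localhost:11434", // default Ollama endpoint
    model: "llama3",                   // illustrative model name
    keepAlive: "30m",                  // keep weights in memory for 30 minutes
  });

  const reply = await model.invoke("Hello!");
  console.log(reply.content);
}

main().catch(console.error);
```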