It would be great if the “Ollama chat model” node had a timeout option for requests to Ollama.
My use case:
When working with Ollama models, especially in production environments, requests can sometimes hang indefinitely due to network issues, model loading delays, or other unforeseen circumstances. A timeout option would:
- Prevent workflows from getting stuck indefinitely on Ollama requests
- Allow for better error handling and retry mechanisms
- Improve overall workflow reliability and stability
- Enable setting appropriate timeout limits based on model complexity and expected response times
Any resources to support this?
- The Ollama REST API documentation shows that timeouts can be handled at the client level
- Many HTTP clients (axios, fetch, etc.) support configurable timeouts
- Similar timeout options exist in other n8n nodes for API integrations
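As a rough illustration of the client-level approach, here is a minimal sketch of how a hung request could be bounded by racing it against a timer. The helper name `withTimeout` and the endpoint in the usage comment are hypothetical examples, not part of n8n's or Ollama's API:

```javascript
// Sketch: reject a pending request once timeoutMs elapses, so a hung
// Ollama call fails fast instead of blocking the workflow indefinitely.
function withTimeout(promise, timeoutMs) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(
      () => reject(new Error(`Request timed out after ${timeoutMs} ms`)),
      timeoutMs,
    );
  });
  // Clear the timer in either outcome so no dangling handle remains.
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// Hypothetical usage against Ollama's default local endpoint:
// const reply = await withTimeout(
//   fetch("http://localhost:11434/api/chat", { method: "POST", body: payload }),
//   30_000,
// );
```

The rejected promise can then feed n8n's existing error-handling and retry paths.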
Are you willing to work on this?
Yes, I would be willing to contribute to implementing this feature if guidance is provided.