I am trying to create an AI Agent with Ollama Mistral 7B as the Chat Model. The problem I am facing is that the responses do not come back consistently. Either there is a very long delay (more than 5 minutes for a simple “Hi” message), or the node fails and the entire path shows up as a red line. Sometimes a “credentials not found” error comes up. Can I get some steps to resolve this? Are there any hardware requirements for running Ollama?
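For context, this is the kind of direct check I can run against Ollama outside of n8n to see whether the delay is in the model itself or in the n8n connection (this assumes a default local install listening on http://localhost:11434 and that mistral:7b has already been pulled):

```bash
# Minimal sketch: call the Ollama HTTP API directly, bypassing n8n.
# Assumes the default local endpoint http://localhost:11434.
curl http://localhost:11434/api/generate -d '{
  "model": "mistral:7b",
  "prompt": "Hi",
  "stream": false
}'
```

If this comes back within a few seconds, the bottleneck is presumably in the n8n connection rather than in Ollama itself.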
Hi, thanks for the reply. When I run the model directly it is very quick: the response starts within about 1 second and then streams word by word. I only see the problem when connecting it to the N8N AI node. Any further recommendations to improve this?
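One thing I am wondering about (this is only my assumption, since it depends on how n8n is deployed): if n8n runs in Docker while Ollama runs on the host, the Base URL in the Ollama credentials cannot be http://localhost:11434, because localhost inside the container points to the container itself. A setup roughly like this would be needed instead:

```bash
# Assumption: n8n runs in Docker, Ollama runs on the host machine.
# Make Ollama listen on all interfaces so the container can reach it.
OLLAMA_HOST=0.0.0.0 ollama serve

# Run n8n with a hostname that resolves back to the host.
# (--add-host is needed on Linux; Docker Desktop on Mac/Windows
# provides host.docker.internal out of the box.)
docker run -it --rm --name n8n \
  --add-host=host.docker.internal:host-gateway \
  -p 5678:5678 \
  docker.n8n.io/n8nio/n8n
```

With that setup, the Base URL in the n8n Ollama credentials would be http://host.docker.internal:11434 instead of localhost.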
Hi, I do see the text load word by word, but it takes about 1.5 minutes for the complete message to finish. This is what the loaded model shows:

mistral:7b    f974a74358d6    8.3 GB    20%/80% CPU/GPU    4 minutes from now
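If I read that output right, the 20%/80% CPU/GPU split means part of the model is running on the CPU, which I assume is why generation is slow even though streaming itself works. This is a rough way to measure the raw generation speed outside n8n and confirm that (again assuming the default local install):

```bash
# Print per-request timing stats (load duration, prompt eval rate,
# eval rate in tokens/s) for a direct run of the same model.
ollama run mistral:7b "Hi" --verbose

# Show what is loaded and how it is split across CPU and GPU.
ollama ps
```

A low eval rate here would point at the partial CPU offload (i.e. not enough VRAM to hold the full 8.3 GB model) rather than at n8n.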