Help with OpenAI node using local LLM error

Hi, looking for some ideas regarding an issue getting responses from a local LLM when using the OpenAI node.
Our LLM is deployed with an OpenAI-compatible API and can be reached by a number of other applications. In n8n, after adding the URL and API key, the connection shows green, and in the list I can see the models that are available. However, when attempting to get a response from the model, the error returned is: “404 status code (no body) troubleshooting URL: http………/MODEL_NOT_FOUND”

If I use the HTTP Request node with the same connection information, a response is returned from the LLM.

I have tried all the other n8n LLM connector nodes, but without success, so any ideas on what to look at next would be really appreciated.
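For reference, the working HTTP Request node call is effectively equivalent to the following sketch. The host, port, and model name here are placeholders, not the real deployment details:

```python
import json

# Hypothetical values: replace with your actual deployment details.
base_url = "http://localhost:8000/v1"  # OpenAI-compatible base URL
model_id = "my-local-model"            # an id reported by GET /v1/models

# OpenAI-style clients append /chat/completions to the base URL.
url = f"{base_url}/chat/completions"

payload = {
    "model": model_id,
    "messages": [{"role": "user", "content": "Hello"}],
}

# With the requests library this would be:
#   requests.post(url, json=payload).json()
print(url)
print(json.dumps(payload))
```

If this exact URL and body work from the HTTP Request node but not from the OpenAI node, the difference must be in how the node builds the URL or the model id.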

Hi,

Maybe n8n does not support this the way you want.

Thanks.

Hey @ageoldpanic !

I think I have read about this before somewhere, and it was about entering the model ID manually instead of relying on the drop-down list.

Also, be sure to use the correct base URL, i.e. one ending in /v1, since the client appends paths like /chat/completions itself.
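To illustrate the base-URL point (host and port are placeholders): an OpenAI-style client joins the chat path onto whatever base URL you give it, so a missing /v1 produces a path the server usually does not serve:

```python
def chat_url(base_url: str) -> str:
    """Join an OpenAI-compatible base URL with the chat completions path."""
    return base_url.rstrip("/") + "/chat/completions"

# With the /v1 suffix, the server's route matches:
print(chat_url("http://localhost:8000/v1"))  # http://localhost:8000/v1/chat/completions

# Without it, the request hits a path that typically returns 404:
print(chat_url("http://localhost:8000"))     # http://localhost:8000/chat/completions
```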

Hope it helps!

Cheers!

Thanks @Parintele_Damaskin for the link. I can see the models in the list, and I have also tried switching from the list to the ID field to enter the model name manually: same result.

What I am not sure about, when it comes to how the node references model names, is this: when I ran a curl command to get a response, even though I used the correct model name in the command, the response always came back with the model name as “lmi”, no matter which of the models I was referencing. And if I use “lmi” as the ID instead of the actual model name, the problem is the same.
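One thing worth checking is what the server itself reports from GET /v1/models, since those ids are exactly what the node's model field has to match. The response body below is a made-up example, not your real deployment's output:

```python
import json

# Hypothetical example of a GET /v1/models response body; the real one
# from your deployment may list different ids.
models_response = json.loads("""
{
  "object": "list",
  "data": [
    {"id": "lmi", "object": "model"},
    {"id": "my-local-model", "object": "model"}
  ]
}
""")

# These ids are what must be entered in the node's model field.
ids = [m["id"] for m in models_response["data"]]
print(ids)
```

If the server advertises one set of ids here but answers chat requests under a different internal name, that mismatch would explain a MODEL_NOT_FOUND from a client that validates the id.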

I did spend a fair few hours “Gemini-ing” the topic, but unfortunately without a result, so I am not quite sure where to troubleshoot further.

Hmm… that “lmi”… maybe it is about routing, since I have never seen it in responses on my local instance (maybe it is AWS and the container is not resolving the path/URL; just guessing, as you are).

So… maybe try “hardcoding” just “lmi” as the ID instead of the model name?

Try the curl command as well, replacing the model value with lmi, from within the container.
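A sketch of what that request body would look like with the model value replaced by “lmi” (host and port in the curl comment are placeholders):

```python
import json

# Chat request body with the model hardcoded to "lmi", the id the
# server itself reports, rather than the advertised model name.
payload = {
    "model": "lmi",
    "messages": [{"role": "user", "content": "Hello"}],
}

# Equivalent curl, run from inside the container:
#   curl http://localhost:8000/v1/chat/completions \
#     -H "Content-Type: application/json" \
#     -d '{"model": "lmi", "messages": [{"role": "user", "content": "Hello"}]}'
print(json.dumps(payload))
```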
Cheers!


Thanks for the ideas and support, but it looks like I am not going to get to the bottom of this as it stands. I am guessing it is a combination of the way the local LLM is deployed and the internal build of the OpenAI node that is causing the issue. n8n are trying to get us, as a company, to use the solution, so I will probably need to reach out to them, and to one of our LLM deployers, once everyone is back from holidays.