[LangChain][MistralAI] Use local model [DUPLICATE]

The idea is:

I have an n8n container running on my private network.
I also have a MistralAI container running on another computer.
I would like to query my local model instead of the public one.

In the MistralAI node, we could add a field to define the server base url.
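To sketch what such a field would change: the node would build its request URL from a configurable base URL instead of a hardcoded one. This is a minimal illustration of the idea, not n8n's actual implementation; the helper name and the local address are hypothetical, and I'm assuming the local server exposes a Mistral-compatible `/v1/chat/completions` route.

```python
def chat_completions_url(base_url: str = "https://api.mistral.ai") -> str:
    """Build the chat-completions URL from a configurable base URL.

    Defaults to Mistral's public API; pass a private address to keep
    traffic on the local network. (Hypothetical helper for illustration.)
    """
    return base_url.rstrip("/") + "/v1/chat/completions"

# Public cloud (current behavior):
print(chat_completions_url())
# → https://api.mistral.ai/v1/chat/completions

# Self-hosted model on the private network (hypothetical address):
print(chat_completions_url("http://192.168.1.10:8000"))
# → http://192.168.1.10:8000/v1/chat/completions
```

The credentials/API-key handling would stay exactly as it is today; only the host part of the request changes.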

My use case:

Like any LangChain use case, but completely free, secure, and self-hosted.

I think it would be beneficial to add this because:

The Mistral Small model can run on a single computer and performs comparably to GPT-4 on many tasks. I could make tons of requests to my model for free, and the data would never leave my network. This is super interesting for companies.

Any resources to support this?

Are you willing to work on this?

Maybe

I just found this PR: feat: Add endpoint parameter on Mistral Cloud API credentials by mprytoluk (n8n-io/n8n#8316 on GitHub)

Hey @LucBerge,

This looks like it is a duplicate of Endpoint option in Mistral Cloud nodes

I am going to close this one. Feel free to pop a vote on the other request.