Describe the problem/error/question
I’m looking at evaluating OpenAI-compatible endpoints, which would be much easier if the OpenAI node could be configured to use an alternative base URL. This seems to be a fairly common approach, given that OpenAI’s API has established a pattern that these kinds of services follow. I could not find any documentation on how to do this, apart from one stray mention of an n8n employee instructing someone to do it with a regular HTTP request. However, I didn’t see an actual example of this in practice, unless they were talking about doing it manually in the base HTTP Request node.
So, my question is: is there a way to set the requestDefaults.baseURL value as an input to the OpenAI node, so that the default is overridden?
The compatible service I was looking to use is https://www.anyscale.com/
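For context, the “alternative base URL” pattern the question refers to can be sketched with the Python standard library: an OpenAI-compatible service accepts the same request shape as OpenAI, only at a different host. This is a minimal sketch, not n8n code; the Anyscale base URL and model name shown here are assumptions for illustration.

```python
import json
import urllib.request

# Assumed base URL for an OpenAI-compatible service (illustrative only);
# the official endpoint would be https://api.openai.com/v1.
BASE_URL = "https://api.endpoints.anyscale.com/v1"


def build_chat_request(base_url: str, api_key: str,
                       model: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) an OpenAI-style chat completion request.

    Only the host changes between providers; the path, headers, and
    payload keep the OpenAI API shape.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url=f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


# Hypothetical API key and model name, purely for illustration.
req = build_chat_request(BASE_URL, "MY_API_KEY",
                         "meta-llama/Llama-2-70b-chat-hf", "Hello")
print(req.full_url)
```

Swapping `BASE_URL` back to OpenAI’s endpoint produces an identical request against the official API, which is exactly why a configurable base URL on the node would be convenient.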
Information on your n8n setup
n8n version: 1.7.1 (Cloud)
Database (default: SQLite): NA
n8n EXECUTIONS_PROCESS setting (default: own, main): NA
Running n8n via (Docker, npm, n8n cloud, desktop app): Cloud
Operating system: NA
Welcome to the community
I suspect that random n8n employee was me. At the moment it is not possible to change the domain used by the OpenAI node, and the workaround would be to use the HTTP Request node and manually implement the API calls you want to make.
If we changed the node to allow the base URL to be configured, the service being used would need to implement the same API calls and authentication options that OpenAI uses, or there would be problems; from there we would no doubt see issue reports for services that don’t do that.
Looking at Anyscale I can’t see any REST API documentation, but I did take a quick look at their Python SDK and it doesn’t look like it has the same options, or is really the same thing. Do you happen to have a link to their API docs that covers the OpenAI endpoints?
We should maybe look at updating the node to allow the change for the Azure endpoint, or perhaps creating a new node if there are differences with it. I still need to dig into that service as well.
Sure, here is the documentation. I thought you could see it without logging in, but now I can’t find where that was, so I will attach a screenshot of it. You can see they show usage of the OpenAI environment variables. That was my first thought too, but looking at the code I couldn’t tell whether a common library was being used that would actually leverage them.
Actually, I looked a bit further and found a longer description of what is and isn’t compatible between the two (although it’s specific to the Python library). I’m assuming this would extend to the raw API as well.
BTW, I did want to clarify: I’m not sure there is value in modifying this node if the use case is narrow. I was mostly just interested in trying the model out in comparison to GPT-4, given the article they wrote comparing Llama 2 variants, GPT-4, and GPT-3.5 Turbo.
My interest was more in understanding whether there is a way to override hard-coded defaults or options in existing n8n nodes in general, without going all the way to a manual implementation. There seems to be a pattern shared by n8n and Node-RED in some cases: you can pass data into a node, and if it matches one of the configuration options the node provides, the node will use the passed-in value instead.
So for our nodes we use the APIs directly, and either the credential or the node contains the URL; in this case it is the node itself.
A quick way would be to run n8n from source and change the URL in the node’s code, but looking at the list of unsupported options it isn’t fully compatible and that would cause issues.
The best option would be to use the HTTP Request node, which I don’t think would take long to configure, so it should let you test.
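Once the HTTP Request node returns the raw response, the remaining work is pulling the assistant’s text out of the OpenAI-style JSON, for example in a Code node placed after it. A minimal sketch, assuming the service returns the standard OpenAI chat completion shape (the sample response below is fabricated for illustration):

```python
import json

# A minimal OpenAI-style chat completion response, as a compatible
# service is expected to return it (fabricated sample data).
sample_response = json.dumps({
    "id": "chatcmpl-123",
    "object": "chat.completion",
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "Hello!"},
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 5, "completion_tokens": 2, "total_tokens": 7},
})


def extract_reply(raw: str) -> str:
    """Pull the assistant's text out of an OpenAI-shaped response body."""
    data = json.loads(raw)
    return data["choices"][0]["message"]["content"]


print(extract_reply(sample_response))  # → Hello!
```

Because compatible providers keep this response shape, the same extraction works regardless of which base URL the HTTP Request node was pointed at.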
This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.