I am running a local instance of n8n on a not-too-powerful machine, which limits which LLMs I can run. For certain things smaller LLMs suffice, and I am able to attach them as a chat model to an AI agent. For other purposes I would like to use venice.ai, a privacy-oriented service that runs open-source LLMs. There is currently no option to add a venice.ai chat model to an AI agent, and I don’t see a way to custom-integrate it as a chat model either. Is there a way?
It looks like your topic is missing some important information. Could you provide the following, if applicable?
- n8n version:
- Database (default: SQLite):
- n8n EXECUTIONS_PROCESS setting (default: own, main):
- Running n8n via (Docker, npm, n8n cloud, desktop app):
- Operating system:
Hi @JohnS0N,
Welcome to the community
Tip for sharing your workflow in the forum
Pasting your n8n workflow
Copy your n8n workflow and paste it in the code block, between the pairs of triple backticks. You can also do this by clicking </> (preformatted text) in the editor and pasting in your workflow.
```
<your workflow>
```
Make sure that you’ve removed any sensitive information from your workflow and include dummy data or pinned data as much as you can!
A great question! You can always work around an integration we don’t have yet by using an HTTP Request node to interact with venice.ai’s API and then connecting it to your AI agent workflow. This would also be a good feature request; you can create one here and upvote it.
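For anyone who wants to try that approach outside n8n first, here is a minimal Python sketch of such a request. The `chat/completions` path and the model name are assumptions based on the API being OpenAI-compatible, so check venice.ai’s docs for the actual values:

```python
import json
import urllib.request

# Assumed endpoint, based on the OpenAI-compatible base URL discussed in
# this thread; verify against Venice's API documentation.
VENICE_URL = "https://api.venice.ai/api/v1/chat/completions"

def build_chat_request(api_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request for venice.ai."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        VENICE_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# To actually send it (needs a valid key and a real model name):
# with urllib.request.urlopen(build_chat_request("YOUR_KEY", "MODEL", "Hi")) as r:
#     print(json.loads(r.read())["choices"][0]["message"]["content"])
```

The same URL, headers, and JSON body go into the n8n HTTP Request node fields.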
Cheers, that’s what I am currently doing. A mix of local LLM and http requests to venice.ai. Would love a proper integration still, so will make a feature request.
hey @JohnS0N,
how does venice.ai integrate in n8n and your workflow?
Are you satisfied with its output, maybe in comparison with ChatGPT or Claude, if you have tried those as well?
If it’s working for you, it’d be awesome if you could share your HTTP Request node configuration for it.
Also when you say:
I am running a local instance of n8n on a not too powerful machine
what are you talking about in terms of CPU and RAM?
Are you running Ollama on it? How does it perform?
Thanks for your insight
Hey @headset907, the CPU is an AMD Ryzen™ 3 4300U, 2.7 GHz, 4 cores, with 64 GB of RAM. I do run Ollama locally and can run smaller models, though it takes a bit of time. These models are not too good and often make mistakes, so results are less than optimal for most cases, except for simpler tasks.
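For anyone curious, talking to a local Ollama directly looks roughly like this. Ollama listens on port 11434 by default; the model name is just an example of a smaller model you might have pulled:

```python
import json
import urllib.request

# Ollama's default local REST endpoint for chat-style requests.
OLLAMA_URL = "http://localhost:11434/api/chat"

def build_ollama_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a chat request for a locally running Ollama instance."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # return one JSON object instead of a stream
    }).encode("utf-8")
    return urllib.request.Request(
        OLLAMA_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

In n8n the Ollama chat model node does this for you; the sketch is just to show what is happening under the hood.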
About Venice.ai, I actually figured out a little trick: because their API is OpenAI-compatible, I can just rewrite the base URL in the OpenAI node and make it think I’m connecting to ChatGPT. The credentials will appear as not working, but they do in fact work in the node itself (the node is the OpenAI node).
However, I only later saw that Venice.ai does not yet support tools, so at the moment (as far as I know) I am not able to use Venice’s API for agents that perform tasks, only for chat input and output, which can be done with an HTTP Request node as well.
The API is currently in beta, and I think they will support tools eventually (they kind of have to), but they have no timeframe for that.
So Venice.ai is more or less on hold for me because of that.
@JohnS0N, thanks a lot for the detailed answer!
Ok, yeah, running our own Ollama models still requires quite some power.
I like your trick for Venice with the OpenAI node. So you’re creating an OpenAI credential but with your Venice API key? I’m not sure I understand correctly.
Would you mind sharing some screenshot of your solution?
When you say they still don’t support tools, are you saying that even using their docs and an HTTP Request node you can’t access all their services yet? (I’m still learning, as you can see.)
Thanks a lot
We’re in great luck, brother. I contacted Venice and, just today, they enabled tools!
You could access the chat models before, but the LLMs did not offer tools, meaning your agents could not perform actions; you could only get responses to your queries via an HTTP request. Now the agents can perform actions via Venice.
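For context, “tools” here means the OpenAI-style function-calling part of the request: the model can ask the client to run a function and feed the result back. A minimal sketch of what that payload looks like (the tool and model names are made-up examples):

```python
# An OpenAI-style tool definition: this is what "tools support" enables.
# The weather tool and model name below are purely illustrative.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

payload = {
    "model": "some-venice-model",  # placeholder model name
    "messages": [{"role": "user", "content": "Weather in Oslo?"}],
    "tools": tools,
}
```

Without tools support, the provider simply ignores or rejects the `tools` field, which is why agents could not perform actions before.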
So, to set this up, you create a new credential as if it were an OpenAI credential, but you input your Venice API key. The credential check will report it as not working.
Then in your agent you pick the chat model to be from OpenAI and select this OpenAI credential. Then in the settings below, find the base URL and rewrite it to “https://api.venice.ai/api/v1”.
This is how it works for OpenRouter and should work for Venice.ai as well. However, right now it’s still throwing me an error; perhaps the tools have not been enabled yet, so we should check periodically or contact the team again. I will do so in the next couple of days.
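One detail worth noting about the base-URL swap: an OpenAI-compatible client appends each endpoint path to whatever base URL you configure, which is why the override has to include the `/api/v1` prefix. A tiny sketch of that resolution:

```python
# How an OpenAI-style client resolves calls after a base-URL override:
# every endpoint path is appended to the configured base URL.
VENICE_BASE = "https://api.venice.ai/api/v1"

def endpoint(base: str, path: str) -> str:
    """Join a base URL and an API path the way an OpenAI-style client does."""
    return base.rstrip("/") + "/" + path.lstrip("/")

# endpoint(VENICE_BASE, "chat/completions")
# gives "https://api.venice.ai/api/v1/chat/completions"
```

If the `/api/v1` part were missing from the base URL, every request would hit the wrong path and fail, independently of whether tools are enabled.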
I tried to use the OpenAI Chat Model with Venice.ai, but unfortunately their API does not seem compatible.
I ended up creating a community node to be able to use Venice.ai.
It currently does not support tools and memory, but it works fine with Chat and generating images.
aha! @JohnS0N thanks so much for following up, man!
I’ll try and will get back to you
Nice @tbystrican !!! Thanks a lot for sharing, that’s so cool!
I just tried it with the image generation and it sent me back some data!
When I download it, it’s a JSON file. How can I get a JPG or PNG of it so I can see the actual picture?
Also, do you think you could include the Venice logo in the credential so it shows up like the OpenAI one?
Thank you so much for this contribution, this project is so cool!
Hi @headset907 ,
I did not find a way to set the image for the credentials. If you do, let me know, and I’ll add it.
Regarding the binary image, the current version only supports base64-encoded images, which can be converted to binary using the Convert To File node.
Here is a sample workflow:
aha! it works! so cool. OK, I’ll be back at it tomorrow and will ping you if I find a solution for that icon thingy (and questions if I have some).
Thanks dude it’s so freaking cool
This is awesome, I will take a look at it and play with the node!
I haven’t gotten as far as creating a custom node in n8n yet, but it should not be too hard, I hope. Adding tools and memory would be crazy good.
Agree, memory and tools would be nice to have.
I’ll see what I can do, no promises when and if.
I have just published a new version, which now handles “Return Binary” option for images.
The Venice logo icon in credentials is still an issue. It works in my dev instance, but I did not have success with it when I imported the community node from npm.
Changelog
v1.1
- first public version
v1.2
- new credentials node with API key verification
- venice logo icon added to credentials node
- added model filtering to only show text models for Chat and image models for images
v1.3
- Return Binary image option added
Man, you rock @tbystrican !
- How do I get the new version? Remove the node and re-install it, or can I update it somewhere? Found it, in the settings!
- I see the prompt is limited to 250 characters, which is quite small for a good description. Is there a way to increase it, or is it limited by Venice?
- I don’t see the Return Binary option as shown in your screenshot.
- I removed and re-added my credential but still don’t see the logo.
Thanks for doing this
You need to go to Settings > Community Nodes and click on UPDATE.
I had to also restart n8n, but I’m not sure whether that will also be the case for you.
The 250 characters limit seems to be Venice API limitation for image generation.
hey @tbystrican,
I reinstalled the node, deleted and re-created the credential and restarted my n8n instance.
- Now I’m getting the Return Binary toggle!
- The credential shows me a broken logo image, so we’re making progress!
Good to hear, yep the logo is a known issue