I want to set up an n8n workflow, but instead of connecting to my local Ollama server for LLM queries, I want to connect to a Fireworks.ai serverless model/deployment.
I checked the n8n integrations, but it seems there is no support for Fireworks.ai yet.
Is there any way to achieve this?
Thanks
While we don’t have a native Fireworks.ai node, their API is OpenAI-compatible, so you can use the OpenAI Chat Model node and change the Base URL to https://api.fireworks.ai/inference/v1. I just tried it and was able to chat with llama-v3p1-405b-instruct:
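Outside of n8n, the same OpenAI-compatible endpoint can be called directly over HTTP. Here is a minimal stdlib-only sketch; the model ID, API key placeholder, and `build_chat_request` helper are illustrative assumptions, not part of n8n or the Fireworks SDK:

```python
import json
import urllib.request

# Fireworks.ai's OpenAI-compatible base URL (same value used in the
# n8n OpenAI Chat Model node's Base URL field).
FIREWORKS_BASE_URL = "https://api.fireworks.ai/inference/v1"

def build_chat_request(model, messages, api_key):
    """Build an OpenAI-style chat-completion HTTP request for Fireworks.ai."""
    payload = json.dumps({"model": model, "messages": messages}).encode()
    return urllib.request.Request(
        f"{FIREWORKS_BASE_URL}/chat/completions",
        data=payload,
        headers={
            "Content-Type": "application/json",
            # Placeholder key; use a real Fireworks.ai API key.
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

# Usage (requires a real key and network access):
# req = build_chat_request(
#     "accounts/fireworks/models/llama-v3p1-405b-instruct",
#     [{"role": "user", "content": "Hello"}],
#     "YOUR_API_KEY",
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

Because the request shape matches OpenAI's chat-completions API, any OpenAI client library should also work by pointing its base URL at the Fireworks endpoint.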