N8n workflow using Fireworks.ai serverless/deployment for queries?

I want to set up an n8n workflow, but instead of connecting to my local Ollama server for LLM queries, I want to connect to a Fireworks.ai serverless deployment.

I checked the n8n integrations, but it seems there is no support for Fireworks.ai yet.
Is there any way to achieve what I want?

Thanks

It looks like your topic is missing some important information. Could you provide the following, if applicable?

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

Hi @h8h,

while we don’t have a native Fireworks.ai node, Fireworks.ai exposes an OpenAI-compatible API, so you can use the OpenAI Chat Model node and change the Base URL to https://api.fireworks.ai/inference/v1. I just tried it and was able to chat with llama-v3p1-405b-instruct:
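If it helps to see what the node sends under the hood, here is a minimal sketch of the OpenAI-compatible chat completion request with the Base URL swapped for Fireworks. The model ID follows Fireworks' `accounts/fireworks/models/...` naming scheme and is an assumption here; substitute whichever model you deployed.

```python
import json

# Base URL override — this is what you enter in the OpenAI Chat Model node.
FIREWORKS_BASE_URL = "https://api.fireworks.ai/inference/v1"

def build_chat_request(prompt: str,
                       model: str = "accounts/fireworks/models/llama-v3p1-405b-instruct"):
    """Return (url, json_body) for an OpenAI-compatible chat completion call.

    The model ID is an assumed example of Fireworks' naming convention.
    """
    url = f"{FIREWORKS_BASE_URL}/chat/completions"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, body

url, body = build_chat_request("Hello!")
print(url)  # .../chat/completions — same path the OpenAI node calls
```

The only thing that changes versus a stock OpenAI setup is the host; the request path, payload shape, and Bearer-token auth (using your Fireworks API key) stay the same, which is why the OpenAI node works unmodified.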

Here’s a very simple workflow that demonstrates it:

I hope that helps!

Oleg

