How to use embedding models with OpenRouter

Hi everyone,
I have used these OpenRouter credential settings, and they work for a chat model. However, I'm having an issue using OpenRouter as an embedding model. Has anyone succeeded? Thanks!

Describe the problem/error/question

It does not seem to work the same way with any model I tried. Can we use embeddings with OpenRouter, with a model such as “google/gemma-2-9b-it:free” or “text-embedding-3-small”?
Or which models on OpenRouter could work for embeddings?

What is the error message (if any)?

“404 Not Found. Troubleshooting URL: MODEL_NOT_FOUND | 🦜️🔗 LangChain”

Please share your workflow

Information on your n8n setup

  • n8n version: Version 1.76.1
  • Database (default: SQLite): N/A
  • n8n EXECUTIONS_PROCESS setting (default: own, main): I don't quite understand this setting, but I think it's not relevant to this topic.
  • Running n8n via: Coolify Docker
  • Operating system: Self-hosted VPS running Ubuntu with the Coolify panel.

It looks like your topic is missing some important information. Could you provide the following, if applicable?

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

The information is already listed above.

Commenting to follow this thread!


OpenRouter currently does not provide access to embedding models through its API. At the moment, it only offers API access to LLMs. To use ‘text-embedding-3-small,’ you may need to use an OpenAI API key instead.
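Since OpenRouter only serves chat/completion models, one workaround is to call the OpenAI embeddings endpoint directly with an OpenAI key. Below is a minimal sketch (not an official n8n integration) assuming Node 18+ with the global `fetch` and an `OPENAI_API_KEY` environment variable:

```javascript
// Sketch: call the OpenAI embeddings endpoint directly,
// since OpenRouter does not expose embedding models.
// Assumes Node 18+ (global fetch) and an OPENAI_API_KEY env var.

const ENDPOINT = "https://api.openai.com/v1/embeddings";

// Build the fetch options for an embeddings request.
function buildEmbeddingRequest(texts, model = "text-embedding-3-small") {
  return {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({ model, input: texts }),
  };
}

// Send the request and return one embedding vector per input text.
async function embed(texts) {
  const res = await fetch(ENDPOINT, buildEmbeddingRequest(texts));
  if (!res.ok) throw new Error(`Embedding request failed: ${res.status}`);
  const data = await res.json();
  return data.data.map((d) => d.embedding);
}
```

In n8n you can reproduce the same request with an HTTP Request node pointed at the same endpoint, using an OpenAI credential instead of the OpenRouter one.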


Right, thank you. I am still looking for alternative ways to connect, given the limited embedding support.

@AADev, did you find a way with OpenRouter or another free embedding model?
Thanks

Hi, I moved to the Google Gemini text embedding model for now.


Use an HTTP Request node to call an embeddings API directly, and chunk your text with a Code node.
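The chunking part can be sketched like this for an n8n Code node. The chunk size, overlap, and the `text` field name are all assumptions to adjust to your workflow:

```javascript
// Minimal sketch of chunking text before sending each chunk
// to an embeddings API via an HTTP Request node.
// chunkSize/overlap are character counts, not tokens — an assumption
// that keeps the example simple.

function chunkText(text, chunkSize = 500, overlap = 50) {
  const chunks = [];
  // Advance by (chunkSize - overlap) so consecutive chunks share context.
  for (let start = 0; start < text.length; start += chunkSize - overlap) {
    chunks.push(text.slice(start, start + chunkSize));
  }
  return chunks;
}

// In an n8n Code node you would emit one item per chunk, e.g.:
// return chunkText($json.text).map((chunk) => ({ json: { chunk } }));
```

Each emitted item can then be fed to the HTTP Request node, which posts the chunk to whichever embeddings endpoint you have access to.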

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.