Hi everyone,
I’m struggling with the OpenAI Embeddings node in my self-hosted n8n instance (Docker, latest image, OpenAI Embeddings Node v1.2).
Issue

- In the Embeddings node, no embedding models (like text-embedding-ada-002 or text-embedding-3-small) appear in the dropdown; only chat/completion models such as GPT-4, GPT-4o, etc.
- If I manually enter text-embedding-3-small as the model, I get the error: `400 status code (no body)`
- My OpenAI API key is valid and works with other OpenAI nodes.
- Pinecone is correctly connected and working.
- There is no “Use Response API” option in the Embeddings node.
- I’m running n8n as a Docker container (not via docker-compose).
What I’ve tried

- Pulled the latest n8n image and restarted the container multiple times (docker pull n8nio/n8n:latest, etc.).
- Confirmed that the Embeddings node is present in the workflow.
- Checked whether other embedding models are available (none are).
- Tested my API key directly with OpenAI (it works).
- Tried various combinations of credentials and settings.
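For reference, this is roughly how I verified the key outside n8n — a minimal Python sketch (stdlib only) that builds the call to OpenAI’s documented `/v1/embeddings` endpoint and only hits the network when an `OPENAI_API_KEY` environment variable is set:

```python
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/embeddings"

def build_embedding_request(api_key: str, text: str,
                            model: str = "text-embedding-3-small") -> urllib.request.Request:
    """Build the POST request for OpenAI's /v1/embeddings endpoint."""
    payload = json.dumps({"model": model, "input": text}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

if __name__ == "__main__":
    key = os.environ.get("OPENAI_API_KEY")
    if key:  # only calls the API when a key is configured
        with urllib.request.urlopen(build_embedding_request(key, "hello world")) as resp:
            vector = json.loads(resp.read())["data"][0]["embedding"]
            # text-embedding-3-small returns 1536-dimensional vectors by default
            print(len(vector))
```

Calling the endpoint this way with the same key and `text-embedding-3-small` succeeds, which is why the `400 status code (no body)` from the node confuses me.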
Additional Observations / UI Behavior

- When I first open the Embeddings node, the input on the left side (e.g., from the Markdown node) is displayed correctly.
- After closing and reopening the node, or after running the workflow, the error occurs and the input on the left side is no longer visible (see attached screenshots).
- This is confusing, since the input is initially passed through correctly.
Questions

- Is this behavior known?
- Are there limitations on embedding models in the Embeddings node for self-hosted/Docker setups?
- Is there a way to get embedding models to show in the dropdown?
- Is this a setup issue or a general bug?
- Is there a workaround to use embeddings in n8n and send them to Pinecone?
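For context, the workaround I’m considering is bypassing the Embeddings node entirely and using two HTTP Request nodes: one POSTing to OpenAI’s `/v1/embeddings` and one POSTing the resulting vector to Pinecone’s `/vectors/upsert`. A sketch of the two JSON bodies as I understand both REST APIs (the Pinecone index host in the comment is a placeholder — yours comes from the Pinecone console):

```python
import json

def openai_embedding_body(text: str, model: str = "text-embedding-3-small") -> str:
    """JSON body for the first HTTP Request node
    (POST https://api.openai.com/v1/embeddings)."""
    return json.dumps({"model": model, "input": text})

def pinecone_upsert_body(vector_id: str, values: list) -> str:
    """JSON body for the second HTTP Request node
    (POST https://<your-index-host>.pinecone.io/vectors/upsert,
    authenticated with the Api-Key header)."""
    return json.dumps({"vectors": [{"id": vector_id, "values": values}]})
```

Does this look like a reasonable interim approach, or is there a supported way to make the Embeddings node itself work?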