Vector nodes store wrong dimension - always 192

Describe the problem/error/question

For some reason the n8n vector store nodes always write 192-dimensional vectors to the vector database.
The embedding models I tried should return 768-dimensional vectors according to their datasheets, and they do so if I call them via terminal or an HTTP Request node with some test input.
So the model, and LM-Studio as the hosting framework, don't seem to be the problem.
I also explicitly checked with Chroma and Qdrant that they allow 768-dimensional collections if I set them up manually.
To my understanding there seems to be an issue with either the vector store nodes or the Embeddings OpenAI node.

Tested Nodes:

  • Chroma Vector Store
  • Qdrant Vector Store
  • Simple Vector Store

Tested Embedding Models:

  • text-embedding-embeddinggemma-300m-qat
  • text-embedding-jina-embeddings-v2-base-de
  • text-embedding-granite-embedding-278m-shindy-multilingual
  • text-embedding-nomic-embed-text-v2-moe

What is the error message (if any)?

  • No error message

Please share your workflow

Workflow returning wrong 192-dimensional vectors:

Working HTTP Request => returns 768-dimensional vectors

Additional:

If I manually set up a 768-dimensional DB via the Qdrant UI, an error is raised that the vector settings do not match:

I cannot manually set the embedding node to 768; that value is not available.

If I call the model manually via terminal, 768-dimensional vectors are returned, so the model works fine.

Information on your n8n setup

Self-hosted as an LXC container via Proxmox

  • n8n version: 2.17.8
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (v1):
  • Running n8n via: LXC, npm
  • Operating system: Alpine Linux

Hi @swdit

The problem is that you’re using the Embeddings OpenAI node pointed at LM-Studio. That node is built around OpenAI’s embedding models and their specific dimension values (256, 512, 1024, 1536, 3072), which is why 768 isn’t in the dropdown. Somewhere in the node’s processing pipeline, your 768-dimension vectors are being truncated to 192 (which is exactly 768/4, so it’s likely not random).

Two things to try:

  1. Use the Embeddings Ollama node instead if your LM-Studio instance exposes an Ollama-compatible API, or check if there’s an LM-Studio community node that handles non-OpenAI embedding dimensions properly.

  2. As a workaround, try setting the Dimensions option explicitly in the Embeddings OpenAI node. Even though 768 isn’t in the dropdown, switch the field to Expression mode and type 768 manually. The dropdown is just a UI convenience for OpenAI’s standard values, but the underlying API parameter might accept arbitrary numbers.

If the expression workaround doesn’t work, this is worth filing as a GitHub issue. The Embeddings OpenAI node is commonly used with OpenAI-compatible backends (LM-Studio, Ollama, vLLM, etc.) and should pass through whatever dimension the model actually returns rather than forcing OpenAI-specific values.

Could you test the expression approach and let me know?

hello @houda_ben

thanks for the helpful assistance – I checked your ideas.
As of now:

  • The Ollama node is not compatible with LM-Studio.

  • The expression attempt does not work.
    It does not seem to be possible to set arbitrary numbers inside the Embeddings OpenAI node.

  • I am currently setting up an Ollama server in parallel so I can use the Ollama node.

@swdit thanks for testing both,

I’d recommend filing a GitHub issue for this. The Embeddings OpenAI node is widely used with OpenAI-compatible backends (LM-Studio, vLLM, LocalAI, etc.) and should either pass through the model’s native dimensions or allow arbitrary values in the Dimensions field. Your screenshots and test results make a solid reproduction case.

:crossed_fingers:

Confirmed: the OpenAI Embeddings node is not the right fit here if it can’t pass arbitrary dimensions like 768.

Setting up Ollama in parallel is probably the cleanest path if you want to stay inside n8n’s AI/vector-store nodes.

One more workaround, if you want to keep LM Studio:

Instead of using the Embeddings OpenAI node, call LM Studio directly with an HTTP Request node, since you already confirmed it returns 768-dimensional vectors. Then either:

  1. send the vectors to Qdrant through Qdrant’s REST API using HTTP Request nodes, or
  2. use a Code node to reshape the response into the format expected by the next node.
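The reshaping step in option 2 can be sketched as a small transform from the embeddings response into the body Qdrant's `PUT /collections/<name>/points` REST endpoint expects. This is an illustrative sketch, not the node's actual internals; the `"text"` payload field name is my own choice:

```python
import uuid

def to_qdrant_points(texts: list[str], embeddings: list[list[float]]) -> dict:
    """Reshape embeddings into a Qdrant upsert-points body.

    Qdrant's REST API takes PUT /collections/<name>/points with a
    {"points": [{"id": ..., "vector": ..., "payload": ...}, ...]} body.
    The vector is passed through unchanged, so 768 dimensions stay 768.
    """
    return {
        "points": [
            {
                "id": str(uuid.uuid4()),    # Qdrant accepts UUIDs or unsigned ints
                "vector": emb,              # full vector, untouched
                "payload": {"text": text},  # arbitrary metadata stored alongside
            }
            for text, emb in zip(texts, embeddings)
        ]
    }
```

In n8n this logic would sit in a Code node between the HTTP Request to LM Studio and the HTTP Request to Qdrant.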

The key requirement is:

  • LM Studio embedding output: 768 dimensions
  • Qdrant collection vector size: 768
  • n8n node passing the full vector unchanged

If any node in the middle assumes OpenAI dimensions or transforms the vector, Qdrant will reject it.

Also, when testing, make sure the Qdrant collection is recreated after changing the vector size. Qdrant will not let an existing collection switch from 192 to 768 dimensions.
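Recreating means deleting the collection and creating it again, since the vector size is fixed in the create body. A sketch of the two REST calls, assuming Qdrant's default port 6333 and a placeholder collection name:

```python
import json
import urllib.request

QDRANT = "http://localhost:6333"  # assumed default Qdrant REST port
COLLECTION = "my_docs"            # placeholder collection name

def create_collection_body(size: int, distance: str = "Cosine") -> dict:
    """Body for PUT /collections/<name>; the vector size is fixed at creation."""
    return {"vectors": {"size": size, "distance": distance}}

def recreate_collection(size: int) -> None:
    """DELETE the old collection, then PUT a new one with the desired size."""
    url = f"{QDRANT}/collections/{COLLECTION}"
    urllib.request.urlopen(urllib.request.Request(url, method="DELETE"))
    req = urllib.request.Request(
        url,
        data=json.dumps(create_collection_body(size)).encode(),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    urllib.request.urlopen(req)
```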

So yes, Ollama node is likely the easiest native n8n workaround, but the HTTP Request → LM Studio → Qdrant REST route should also work and avoids the OpenAI node limitation completely.

I've now set up an Ollama server, and it works: the embedding results are as expected.

I will reach out to LM-Studio and n8n and recommend that they either fix this or at least raise an error.