For some reason, the n8n vector store nodes always send 192-dimensional vectors to the vector database.
The embedding models I tried should return 768-dimensional vectors according to their datasheets, and they do so when I call them via the terminal or an HTTP Request node with some test input.
So the model, and LM Studio as the hosting framework, don't seem to be the problem.
I also explicitly checked that Chroma and Qdrant both allow 768-dimensional collections if I create them manually.
As far as I can tell, the issue lies with either the vector store nodes or the Embeddings OpenAI node.
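For reference, the kind of dimension check described above can be sketched as follows, assuming LM Studio exposes an OpenAI-compatible `/v1/embeddings` response shape; the sample payload and model name below are illustrative placeholders, not real output:

```javascript
// Extract the dimension of each embedding from an OpenAI-style
// embeddings response (response.data[i].embedding is the vector).
function embeddingDimensions(response) {
  return response.data.map((item) => item.embedding.length);
}

// Illustrative response: a 4-dim vector stands in for 768 real values.
const sampleResponse = {
  object: "list",
  data: [{ object: "embedding", index: 0, embedding: [0.1, -0.2, 0.3, 0.4] }],
  model: "some-embedding-model", // placeholder
};

console.log(embeddingDimensions(sampleResponse)); // [4] for this toy payload; a real 768-dim model should give [768]
```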
The problem is that you’re using the Embeddings OpenAI node pointed at LM Studio. That node is built around OpenAI’s embedding models and their specific dimension values (256, 512, 1024, 1536, 3072), which is why 768 isn’t in the dropdown. Somewhere in the node’s processing pipeline, your 768-dimensional vectors are being truncated to 192 (which is exactly 768/4, so it’s likely not random).
Two things to try:
Use the Embeddings Ollama node instead if your LM Studio instance exposes an Ollama-compatible API, or check if there’s an LM Studio community node that handles non-OpenAI embedding dimensions properly.
As a workaround, try setting the Dimensions option explicitly in the Embeddings OpenAI node. Even though 768 isn’t in the dropdown, switch the field to Expression mode and type 768 manually. The dropdown is just a UI convenience for OpenAI’s standard values, but the underlying API parameter might accept arbitrary numbers.
If the expression workaround doesn’t work, this is worth filing as a GitHub issue. The Embeddings OpenAI node is commonly used with OpenAI-compatible backends (LM-Studio, Ollama, vLLM, etc.) and should pass through whatever dimension the model actually returns rather than forcing OpenAI-specific values.
Could you test the expression approach and let me know?
I’d recommend filing a GitHub issue for this. The Embeddings OpenAI node is widely used with OpenAI-compatible backends (LM-Studio, vLLM, LocalAI, etc.) and should either pass through the model’s native dimensions or allow arbitrary values in the Dimensions field. Your screenshots and test results make a solid reproduction case.
Confirmed: the OpenAI Embeddings node is not the right fit here if it can’t pass arbitrary dimensions like 768.
Setting up Ollama in parallel is probably the cleanest path if you want to stay inside n8n’s AI/vector-store nodes.
One more workaround, if you want to keep LM Studio:
Instead of using the Embeddings OpenAI node, call LM Studio directly with an HTTP Request node, since you already confirmed it returns 768-dimensional vectors. Then either:
send the vectors to Qdrant through Qdrant’s REST API using HTTP Request nodes, or
use a Code node to reshape the response into the format expected by the next node.
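A rough sketch of that Code-node reshaping step, assuming an OpenAI-style embeddings response from LM Studio and Qdrant’s REST upsert format (`PUT /collections/{name}/points`); the point ids and payload fields are placeholders you’d replace with your own document metadata:

```javascript
// Reshape an OpenAI-style embeddings response into a Qdrant upsert body.
function toQdrantUpsert(embeddingResponse, texts) {
  return {
    points: embeddingResponse.data.map((item, i) => ({
      id: i,                       // placeholder id; use your own scheme
      vector: item.embedding,      // pass the full 768-dim vector through unchanged
      payload: { text: texts[i] }, // arbitrary metadata stored alongside the vector
    })),
  };
}

// Tiny illustrative input (3-dim vector standing in for 768 values).
const body = toQdrantUpsert(
  { data: [{ index: 0, embedding: [0.1, 0.2, 0.3] }] },
  ["hello world"]
);
```

An HTTP Request node would then send `body` as JSON to Qdrant’s points endpoint.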
The key requirements are:
LM Studio embedding output: 768 dimensions
Qdrant collection vector size: 768
n8n node passing the full vector unchanged
If any node in the middle assumes OpenAI dimensions or transforms the vector, Qdrant will reject it.
Also, when testing, make sure the Qdrant collection is recreated after changing the vector size. Qdrant will not let an existing collection switch from 192 to 768 dimensions.
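Recreating the collection with the right size can be sketched like this, assuming Qdrant’s REST API (`DELETE /collections/{name}`, then `PUT /collections/{name}`); the URL, collection name, and distance metric are placeholders for your setup:

```javascript
const QDRANT_URL = "http://localhost:6333"; // assumed Qdrant default port
const collection = "my_docs";               // placeholder collection name

// Create-collection body: vector size must match the embedding model.
const createBody = {
  vectors: { size: 768, distance: "Cosine" },
};

// In an n8n Code node (or any Node.js >= 18 runtime with global fetch):
// await fetch(`${QDRANT_URL}/collections/${collection}`, { method: "DELETE" });
// await fetch(`${QDRANT_URL}/collections/${collection}`, {
//   method: "PUT",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(createBody),
// });
```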
So yes, the Ollama node is likely the easiest native n8n workaround, but the HTTP Request → LM Studio → Qdrant REST route should also work and avoids the OpenAI node limitation entirely.