I have a Qdrant database with 256-dimensional vectors. The embeddings were generated with the “text-embedding-3-large” model with the dimensions parameter set to 256.
Now I’m building a workflow in n8n with an agent that uses this database as its knowledge base.
In the “Qdrant Vector Store” sub-node, I attached the “Embeddings OpenAI” node as the embedding. How can I set 256 dimensions instead of the default 3072?
What is the error message (if any)?
This is the error I receive from the “Qdrant Vector Store” node:
Bad Request
Error cause:
You will have to use the same model for retrieval that was used for the embedding in order to set the same dimensions. So you will need to select the text-embedding-3-large model in the Qdrant Vector Store sub-node as well.
Thank you for your reply. Actually, the OpenAI text-embedding-3-large model lets you set the dimensions parameter to different values (e.g. 256, 1536, or 3072), not just one. In my case, the embeddings were generated with the text-embedding-3-large model and dimensions set to 256.
It would be very helpful to be able to set this parameter in the Embeddings OpenAI node as well, to make it more flexible.
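In the meantime, a possible workaround: OpenAI's documentation states that the shortened embeddings produced by the dimensions parameter are equivalent to truncating the full-length vector and re-normalizing it. So you could request full-size embeddings and shorten them yourself before querying Qdrant. A minimal sketch in plain Python (no SDK assumed; `shorten_embedding` is a hypothetical helper name):

```python
import math

def shorten_embedding(vector, dims):
    """Truncate an embedding to `dims` entries and L2-renormalize it.
    Per OpenAI's docs, this matches what the native `dimensions`
    parameter does for text-embedding-3-large."""
    truncated = vector[:dims]
    norm = math.sqrt(sum(x * x for x in truncated))
    return [x / norm for x in truncated]

# Example: shorten a dummy 3072-dim vector to 256 dims.
full = [1.0] * 3072
short = shorten_embedding(full, 256)
print(len(short))  # 256
```

The resulting 256-dimensional, unit-length vectors should be directly comparable to the ones already stored in the collection.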
I have the same problem with my Postgres PGVector Store. My embedded data comes from a different backend (I’m currently trying to switch to n8n) and has dimension 1536, but n8n asks for 4096. I really don’t want to re-index all the data.
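Before re-indexing anything, it may be worth confirming which side actually disagrees. A tiny sanity check in plain Python (the helper name `mismatched_dims` is hypothetical) that flags vectors whose length doesn't match the dimension the store expects:

```python
def mismatched_dims(vectors, expected):
    """Return the indices of vectors whose length differs from `expected`
    (e.g. the dimension your PGVector column was declared with)."""
    return [i for i, v in enumerate(vectors) if len(v) != expected]

# Example: two 1536-dim vectors and one 4096-dim outlier.
sample = [[0.1] * 1536, [0.2] * 1536, [0.3] * 4096]
print(mismatched_dims(sample, 1536))  # [2]
```

If all stored vectors really are 1536-dimensional, the 4096 figure is coming from the n8n side's configuration rather than the data.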