API QWEN Embeddings

Hello everyone! Does anyone know how to connect QWEN Embedding via the official API in n8n? Or maybe it’s possible to create an equivalent workflow using HTTP REQUEST nodes? I would appreciate any information!
I’ve attached a screenshot of my workflow, a link to the documentation, and a YouTube link where a guy connects QWEN Embedding via Ollama.

From the documentation, it looks like it should be possible to send an HTTP request to the official API for the embedding model.

You should be able (given you have an API key) to send an HTTP request that copies one of the following formats, depending on what you are trying to achieve:

Single string input:

curl --location 'https://dashscope-intl.aliyuncs.com/compatible-mode/v1/embeddings' \
--header "Authorization: Bearer $DASHSCOPE_API_KEY" \
--header 'Content-Type: application/json' \
--data '{
    "model": "text-embedding-v3",
    "input": "The quality of the clothes is excellent, very beautiful, worth the wait, I like it and will buy here again",  
    "dimension": "1024",  
    "encoding_format": "float"
}'

String list input

curl --location 'https://dashscope-intl.aliyuncs.com/compatible-mode/v1/embeddings' \
--header "Authorization: Bearer $DASHSCOPE_API_KEY" \
--header 'Content-Type: application/json' \
--data '{
    "model": "text-embedding-v3",
    "input": [
        "Shall I compare thee to a summers day",
        "Thou art more lovely and more temperate", 
        "Rough winds do shake the darling buds of May", 
        "And summers lease hath all too short a date"
        ],
    "encoding_format": "float"
}'

File input

# Read the file and encode it as a single JSON string:
FILE_CONTENT=$(jq -Rs . texts_to_embedding.txt)
curl --location 'https://dashscope-intl.aliyuncs.com/compatible-mode/v1/embeddings' \
--header "Authorization: Bearer $DASHSCOPE_API_KEY" \
--header 'Content-Type: application/json' \
--data '{
    "model": "text-embedding-v3",
    "input": ['"$FILE_CONTENT"']
}'

You should get a response with the vectors back.
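For reference, the response comes back in the OpenAI-compatible shape (`data` is a list of `{index, embedding}` objects), so you can pull the vectors out with `jq`. The `RESPONSE` below is a made-up illustration, not real API output:

```shell
# Illustrative response in the OpenAI-compatible shape; a real reply
# would contain full-length vectors and a usage block.
RESPONSE='{"object":"list","data":[{"object":"embedding","index":0,"embedding":[0.1,0.2,0.3]}],"model":"text-embedding-v3"}'

# Pull out the embedding of the first input:
echo "$RESPONSE" | jq -c '.data[0].embedding'
```

For list inputs, each item in `data` carries an `index` matching the position of the corresponding input string.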

Thank you for your answer, but I don’t understand how to connect the HTTP Request node to the Pinecone Vector Store node.

Well, after sending the above-mentioned requests you would already have the embeddings, so you wouldn’t need a Pinecone Vector Store node. You’d have to insert the embeddings directly via the Pinecone API as well.
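A minimal sketch of that direct insert, assuming Pinecone’s REST upsert endpoint. `INDEX_HOST`, `PINECONE_API_KEY`, the `id` and the `namespace` are all placeholders you’d swap for your own values (the real host comes from your index details in the Pinecone console):

```shell
# Placeholder upsert payload: id, values and namespace are illustrative.
PAYLOAD='{
    "vectors": [
        {
            "id": "doc-1-chunk-0",
            "values": [0.1, 0.2, 0.3],
            "metadata": {"text": "the original chunk text"}
        }
    ],
    "namespace": "example-namespace"
}'

# The request only fires if you have actually set your credentials:
if [ -n "${PINECONE_API_KEY:-}" ] && [ -n "${INDEX_HOST:-}" ]; then
    curl --location "https://$INDEX_HOST/vectors/upsert" \
        --header "Api-Key: $PINECONE_API_KEY" \
        --header 'Content-Type: application/json' \
        --data "$PAYLOAD"
fi

# Sanity-check that the assembled payload is valid JSON:
echo "$PAYLOAD" | jq -r '.vectors[0].id'
```

The `values` array would be the embedding you got back from the DashScope call, and `metadata.text` is what lets you recover the original chunk at query time.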

Or you could do what they did on the video and run this model yourself I guess.

Thank you for your response! I’m just starting to learn the specifics of how n8n works, but the main thing I’ve realized is that you can do without specialized nodes by using the HTTP Request node.

Very true, if there is no native node, you can always (well, almost) fall back to doing it yourself with a good ol’ http request.

Sorry, may I ask you to assemble the workflow as in the picture, but without using the nodes highlighted in yellow? Instead, please use HTTP request nodes.

Sure thing, this is what it would look like:

And this is the workflow

This is how I tested:

  • I uploaded a document to Pinecone
  • I configured the AI Agent with instructions for how to get the vector and how to retrieve info from Pinecone based on it.
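The retrieval side the agent performs can be sketched with plain curl too: embed the question, then hit Pinecone’s query endpoint with the resulting vector. `VECTOR`, `INDEX_HOST` and `PINECONE_API_KEY` are placeholders here; in the real flow the vector comes from the embeddings call shown earlier:

```shell
# Stand-in vector; in reality this is the embedding of the user's question.
VECTOR='[0.1, 0.2, 0.3]'
QUERY_PAYLOAD='{"vector": '"$VECTOR"', "topK": 3, "includeMetadata": true}'

# With real credentials set, the lookup would be:
if [ -n "${PINECONE_API_KEY:-}" ] && [ -n "${INDEX_HOST:-}" ]; then
    curl --location "https://$INDEX_HOST/query" \
        --header "Api-Key: $PINECONE_API_KEY" \
        --header 'Content-Type: application/json' \
        --data "$QUERY_PAYLOAD"
fi

# Sanity-check that the assembled payload is valid JSON:
echo "$QUERY_PAYLOAD" | jq -c '.topK'
```

`includeMetadata: true` is what brings the stored chunk text back alongside the match scores, so the agent has something to answer from.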

This is what the document looked like (a lot, for a rather simple car fix, right?):

And this is my conversation with the model:

Note: I used 512-dimensional OpenAI embeddings (for both ingestion and retrieval).

Hope it helps!

Thank you!!!