Hey Aditya! If you’re trying to build a RAG system with Azure AI Search in n8n, here’s the general path:
1. Vectorize the query:
n8n doesn’t have a native Azure embedding node yet, but you can use an HTTP Request node to call your embeddings deployment in Azure OpenAI (or any compatible embedding model endpoint), e.g. text-embedding-ada-002 if that’s available in your setup.
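To make that concrete, here’s a rough sketch of the embeddings call. The resource name, deployment name, and API version below are placeholders, so swap in your own; in n8n you’d put the same URL, api-key header, and JSON body into the HTTP Request node.

```python
# Sketch of the embeddings call the HTTP Request node would make.
# The endpoint, deployment name, and API version are placeholders -- adjust to your setup.
import requests

AZURE_OPENAI_ENDPOINT = "https://<your-resource>.openai.azure.com"  # placeholder
EMBEDDING_DEPLOYMENT = "text-embedding-ada-002"                     # your deployment name
API_VERSION = "2023-05-15"                                          # assumed GA API version

def embed_query(query: str, api_key: str) -> list[float]:
    """Return the embedding vector for a query string."""
    url = (
        f"{AZURE_OPENAI_ENDPOINT}/openai/deployments/"
        f"{EMBEDDING_DEPLOYMENT}/embeddings?api-version={API_VERSION}"
    )
    resp = requests.post(
        url,
        headers={"api-key": api_key, "Content-Type": "application/json"},
        json={"input": query},
    )
    resp.raise_for_status()
    # The vector is at data[0].embedding (1536 floats for text-embedding-ada-002).
    return resp.json()["data"][0]["embedding"]
```

In n8n, you can then grab that array in the next node with an expression along the lines of `{{ $json.data[0].embedding }}`.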
2. Pull data from Azure AI Search:
After generating the vector, use another HTTP Request node to query your Azure AI Search index, passing the vector in the request body.
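Here’s a sketch of that search call, assuming the 2023-11-01 REST API and an index with a vector field named contentVector; the service name, index name, and field names are placeholders for whatever your index actually defines.

```python
# Sketch of the vector query against Azure AI Search.
# Service name, index name, and the "contentVector" / "title,content" field names
# are placeholders -- use whatever your index schema defines.
import requests

SEARCH_ENDPOINT = "https://<your-service>.search.windows.net"  # placeholder
INDEX_NAME = "my-rag-index"                                    # placeholder
SEARCH_API_VERSION = "2023-11-01"                              # assumed GA API version

def search_by_vector(vector: list[float], api_key: str, top_k: int = 5) -> list[dict]:
    """Return the top-k documents closest to the query vector."""
    url = (
        f"{SEARCH_ENDPOINT}/indexes/{INDEX_NAME}/docs/search"
        f"?api-version={SEARCH_API_VERSION}"
    )
    body = {
        "select": "title,content",          # fields to return (assumed names)
        "vectorQueries": [
            {
                "kind": "vector",
                "vector": vector,           # the embedding from the previous step
                "fields": "contentVector",  # your index's vector field (assumed name)
                "k": top_k,
            }
        ],
    }
    resp = requests.post(
        url,
        headers={"api-key": api_key, "Content-Type": "application/json"},
        json=body,
    )
    resp.raise_for_status()
    # Matching documents come back in the "value" array.
    return resp.json()["value"]
```

The same JSON body goes straight into the HTTP Request node’s body field, and the documents returned in `value` are what you’d stitch into your LLM prompt as context.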