How to use Ollama to implement a RAG Q&A system?

My system is Windows 11, and I have deployed n8n locally with Docker Desktop. I have also installed Ollama directly on Windows and set the base URL to http://host.docker.internal:11434. n8n shows that the Ollama Embeddings node connects successfully, but the automatically selected llama3.2 model fails with the error "the value 'llama3.2' is not supported!" How can I configure Ollama's embedding model to implement RAG? Help me!!!
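For context, the Ollama API itself is reachable from the container. Here is a quick way to see what the connection actually exposes (a minimal sketch, not n8n code: it assumes Python with the requests library and uses Ollama's standard /api/tags model-listing endpoint at the base URL configured above):

```python
import requests

# Base URL as configured in the n8n Ollama credentials; inside the
# Docker container, host.docker.internal resolves to the Windows host.
OLLAMA_URL = "http://host.docker.internal:11434"

# GET /api/tags lists the models Ollama currently has pulled locally.
resp = requests.get(f"{OLLAMA_URL}/api/tags")
resp.raise_for_status()
print([m["name"] for m in resp.json().get("models", [])])
# In a setup like this one, the only model listed is llama3.2, which is
# a chat model, not an embedding model, so the embeddings node rejects it.
```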

Haha, I solved this problem myself!

You need to pull an embedding model in Ollama. I chose nomic-embed-text (`ollama pull nomic-embed-text` on the Windows host), then selected it in the Embeddings Ollama node instead of llama3.2.
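For anyone hitting the same wall, here is a quick sanity check that the pulled model actually returns vectors (again a sketch under assumptions: Python with the requests library, Ollama's standard /api/embeddings endpoint, and the default port):

```python
import requests

# From inside the n8n container use http://host.docker.internal:11434;
# from the Windows host itself, http://localhost:11434 also works.
OLLAMA_URL = "http://host.docker.internal:11434"

# POST /api/embeddings asks the named model to embed a prompt.
resp = requests.post(
    f"{OLLAMA_URL}/api/embeddings",
    json={"model": "nomic-embed-text", "prompt": "test sentence for RAG"},
)
resp.raise_for_status()
embedding = resp.json()["embedding"]
print(len(embedding))  # nomic-embed-text returns 768-dimensional vectors
```

Once this returns a vector, pointing the Embeddings Ollama node at nomic-embed-text (while keeping llama3.2 in the Ollama Chat Model node) should give you both halves of the RAG pipeline.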
