My system is Windows 11, and I have n8n deployed locally with Docker Desktop. Ollama is installed directly on Windows, and in the n8n credential I set the host to http://host.docker.internal:11434. n8n reports that the Ollama Embeddings connection is successful, but the automatically selected model llama3.2 fails with the error: the value "llama3.2" is not supported. How do I configure an Ollama embedding model so I can implement RAG? Any help would be appreciated!
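For context, here is roughly how I can check the setup from the command line (a sketch assuming the standard Ollama REST API; `nomic-embed-text` is just an example of an embedding-capable model name, not necessarily the one n8n expects):

```shell
# On the Windows host: pull a dedicated embedding model
# (llama3.2 is a chat model; the embeddings endpoint generally
#  needs an embedding model -- example name, adjust as needed)
ollama pull nomic-embed-text

# From inside the n8n container: verify Ollama is reachable
# and list the models it actually exposes
curl http://host.docker.internal:11434/api/tags

# Request a test embedding directly from the Ollama API
curl http://host.docker.internal:11434/api/embeddings \
  -d '{"model": "nomic-embed-text", "prompt": "hello world"}'
```

If the `/api/tags` call succeeds from inside the container, the Docker-to-host networking part of the setup is fine and the problem is only the model selection.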




