Ollama is running (ollama serve active, model loaded successfully — confirmed by occasional 200 on /api/embed).
What I’ve Tried
Changed topK from $fromAI() expression to hardcoded 4
Changed model name from nomic-embed-text:v1.5 → nomic-embed-text:latest → nomic-embed-text
Switched vector store from Pinecone (768-dim, Dense, cosine) to Qdrant
Verified Ollama credential URL is http://127.0.0.1:11434
Confirmed Qdrant collection exists and documents are ingested
Added "name" parameter to toolVectorStore node
Question
What is n8n sending to /api/embed that causes Ollama to return 400 invalid input type? Is there a known compatibility issue between embeddingsOllama node and toolVectorStore in n8n 2.9.x?
Any working example of Ollama embeddings + Qdrant + toolVectorStore would be very helpful.
Hi @Anis0315, welcome to the community!
A known workaround for this problem is to replace the Ollama chat model with the OpenAI chat model node pointed at your local Ollama instance, i.e. set the base URL to something like http://127.0.0.1:11434
This is actually a known bug in n8n: the Embeddings Ollama node sends [null] as the input to /api/embed instead of the actual query text when it’s used inside a toolVectorStore, so Ollama rejects the request immediately. There’s an open issue tracking it here: Qdrant\Ollama embeddings not working · Issue #13409 · n8n-io/n8n · GitHub
This is a known bug in n8n’s Embeddings Ollama node: it ends up sending input: [null] to Ollama’s /api/embed instead of the actual text, especially when used inside a toolVectorStore with streaming enabled. The workaround that actually works is to use the OpenAI-compatible node pointed at your Ollama instance (http://127.0.0.1:11434/v1) for embeddings instead of the native Ollama node.
This is a known bug where n8n sends [null] as the input to /api/embed instead of the actual query text; there’s a GitHub issue tracking it (#13409). The quickest fix is to bypass the Embeddings Ollama node entirely and hit Ollama’s /api/embed endpoint directly with an HTTP Request node. That works fine, since the problem is in n8n’s node, not in Ollama itself.
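To illustrate what a valid request looks like, here is a minimal sketch of the JSON body such an HTTP Request node would POST to http://127.0.0.1:11434/api/embed (the helper name and query text are my own, not from n8n):

```python
import json


def build_embed_payload(model: str, text: str) -> dict:
    """Body for Ollama's native /api/embed: "input" may be a string or a list of strings."""
    if not isinstance(text, str) or not text.strip():
        # Guard against the bug described above, where [null]/empty input yields a 400
        raise ValueError("input must be a non-empty string")
    return {"model": model, "input": text}


# The body the HTTP Request node should send (instead of n8n's [null]):
body = json.dumps(build_embed_payload("nomic-embed-text", "example query"))
print(body)
```

Sending this body with Content-Type: application/json should return an `embeddings` array, whereas `{"model": "...", "input": [null]}` reproduces the 400.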
Following up with everything I’ve attempted since the original post:
1. Hardcoded topK: replaced the $fromAI() expression with a fixed value of 4. No change.
2. Model name variations: tried all three formats for the embedding model:
nomic-embed-text:v1.5
nomic-embed-text:latest
nomic-embed-text
All produce the same 400 on /api/embed.
3. Switched vector store, Pinecone → Qdrant: set up Qdrant locally via Docker (localhost:6333), created a collection, and re-ingested documents using Embeddings Ollama + Qdrant Vector Store. The same error persists when querying.
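For anyone reproducing this, the local Qdrant setup described above is roughly (a setup sketch, default ports assumed):

```
docker run -d --name qdrant -p 6333:6333 qdrant/qdrant
```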
4. Added name parameter to toolVectorStore node: the node was missing "name": "tesla_q", so I added it. No change to the error.
5. Switched Embeddings Ollama → Embeddings OpenAI node pointed at Ollama: used the OpenAI-compatible endpoint http://127.0.0.1:11434/v1 with the OpenAI embeddings node. Got a new error:
TypeError: Cannot read properties of undefined (reading 'replace')
at embedQuery (embeddings.ts:204)
The model name resolves correctly from the dropdown, but the embedding call still fails.
6. Downgraded toolVectorStore typeVersion from 1.1 to 1: matched the version used in working tutorials. No change.
Current state: Ollama is running, model is loaded, Qdrant has data — but every /api/embed call from n8n returns 400 instantly.
Any insight into what payload n8n is sending to /api/embed would help narrow this down. Is there a way to log the raw request body?
@Anis0315 I don’t think you can log the raw request body directly in n8n at the moment. Instead, build the JSON body you expect and execute the request against Ollama manually. For debugging my self-hosted n8n Docker instance, I also turned on N8N_LOG_LEVEL=debug.
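For reference, in a docker-compose setup that flag goes under the service’s environment (a config sketch; the service name and image are assumptions):

```yaml
services:
  n8n:
    image: n8nio/n8n
    environment:
      # Verbose logging, which surfaces node execution details in the container logs
      - N8N_LOG_LEVEL=debug
```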
After extensive troubleshooting, I found a working solution for using Ollama embeddings (nomic-embed-text) with Qdrant in n8n — without using the native Embeddings Ollama node.
The Fix
Use Embeddings OpenAI node instead of Embeddings Ollama, pointed at Ollama’s OpenAI-compatible endpoint.
Configuration:
Node: Embeddings OpenAI
Base URL (in credential): http://127.0.0.1:11434/v1
Dimensions: {{768}}
The Dimensions field is the critical part — without it the workflow fails.
Why the Native Ollama Node Fails
The Embeddings Ollama node sends requests to /api/embed — Ollama’s native endpoint. In n8n 2.9.x, this node sends a malformed or incomplete payload, causing Ollama to immediately reject it with 400 invalid input type. The response time is 0s, meaning Ollama rejects the request before even processing it.
This is a compatibility issue between n8n’s Embeddings Ollama node and the Ollama API — not an Ollama configuration problem.
Why the OpenAI Node Works
Ollama exposes an OpenAI-compatible endpoint at /v1/embeddings. The Embeddings OpenAI node sends a clean, well-structured request that this endpoint fully understands.
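For comparison, the request to the OpenAI-compatible endpoint looks roughly like this (the query text is a placeholder):

```
POST http://127.0.0.1:11434/v1/embeddings
Content-Type: application/json

{
  "model": "nomic-embed-text",
  "input": "actual query text",
  "dimensions": 768
}
```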
Why Dimensions: {{768}} Is Required
The Embeddings OpenAI node was built for OpenAI models like text-embedding-3-small, which support a dimensions parameter. Without explicitly setting it, the node sends dimensions: undefined in the request body — Ollama receives that and returns 400 invalid input type because it doesn’t know what size vector to output.
By setting {{768}} (the native output size of nomic-embed-text), you send a complete, valid request that Ollama accepts and your Qdrant collection matches.
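If you create the Qdrant collection by hand, it has to match that vector size. A sketch against Qdrant’s REST API (the collection name is a placeholder; distance metric assumed to be cosine, as in the original Pinecone setup):

```
PUT http://localhost:6333/collections/my_collection
Content-Type: application/json

{
  "vectors": {
    "size": 768,
    "distance": "Cosine"
  }
}
```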