[Bug/Help] Embeddings Ollama + Qdrant Vector Store — "invalid input type" on /api/embed

Environment

  • n8n version: 2.9.4 (Self Hosted)

  • OS: Windows 10

  • Ollama version: 0.17.4

  • Embedding model: nomic-embed-text:latest

  • Vector store: Qdrant (Docker, localhost:6333)

  • LLM: OpenRouter (claude-3-haiku)

Workflow Structure

ChatTrigger → AI Agent → tesla_q (toolVectorStore)
                              ↓
                     Qdrant Vector Store
                              ↓
                     Embeddings Ollama (nomic-embed-text)

Problem

When sending a chat message, the tesla_q vector store tool fails with:

invalid input type

Ollama logs show repeated 400 responses on POST /api/embed with 0s response time, meaning the request is rejected immediately:

[GIN] 2026/03/02 - 11:31:50 | 400 | 0s | 127.0.0.1 | POST "/api/embed"

Ollama is running (ollama serve active, model loaded successfully — confirmed by occasional 200 on /api/embed).

What I’ve Tried

  • Changed topK from $fromAI() expression to hardcoded 4

  • Changed model name from nomic-embed-text:v1.5 → nomic-embed-text:latest → nomic-embed-text

  • Switched vector store from Pinecone (768-dim, Dense, cosine) to Qdrant

  • Verified Ollama credential URL is http://127.0.0.1:11434

  • Confirmed Qdrant collection exists and documents are ingested

  • Added "name" parameter to toolVectorStore node

Question

What is n8n sending to /api/embed that causes Ollama to return 400 invalid input type? Is there a known compatibility issue between embeddingsOllama node and toolVectorStore in n8n 2.9.x?

Any working example of Ollama embeddings + Qdrant + toolVectorStore would be very helpful.


Hi @Anis0315, welcome to the community!
A known workaround for this problem is to replace the Ollama chat model with the OpenAI chat model node pointed at your local Ollama instance, i.e. set the base URL to something like http://127.0.0.1:11434


This is actually a known bug in n8n: the Embeddings Ollama node sends [null] as the input to /api/embed instead of the actual query text when it’s used inside a toolVectorStore, so Ollama rejects the request immediately. There’s an open issue tracking it here: Qdrant\Ollama embeddings not working · Issue #13409 · n8n-io/n8n · GitHub
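To make the failure mode concrete, here is a minimal sketch of the rejected body versus a body that Ollama’s native /api/embed accepts (a string or a list of strings); the query text is invented:

```python
import json

MODEL = "nomic-embed-text"  # model tag from the original post

# What the linked issue reports the node sending (rejected with 400):
broken = {"model": MODEL, "input": [None]}  # serializes as [null]

# What /api/embed expects: real text, as a string or a list of strings.
valid = {"model": MODEL, "input": ["What is the battery range?"]}

print(json.dumps(broken))  # {"model": "nomic-embed-text", "input": [null]}
print(json.dumps(valid))
```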


This is a known bug in n8n’s Embeddings Ollama node: it ends up sending input: [null] to Ollama’s /api/embed instead of the actual text, especially when used inside a toolVectorStore with streaming enabled. The workaround that actually works is using the OpenAI-compatible node pointed at your Ollama instance (http://127.0.0.1:11434/v1) for embeddings instead of the native Ollama node.


This is a known bug where n8n sends [null] as the input to /api/embed instead of the actual query text; there’s a GitHub issue tracking it (#13409). The quickest fix is to bypass the Embeddings Ollama node entirely and hit Ollama’s /api/embed endpoint directly with an HTTP Request node. That works fine, since the problem is in n8n’s node, not Ollama itself.
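For anyone trying this workaround, the body the HTTP Request node needs is small. A sketch, reusing the model tag from the original post (Ollama’s /api/embed accepts a string or a list of strings as input; the question text is a placeholder):

```json
{
  "model": "nomic-embed-text",
  "input": ["What is the battery range of the Model 3?"]
}
```

Set the method to POST, the URL to http://127.0.0.1:11434/api/embed, and send the body as JSON; the response contains an embeddings array you can pass on to Qdrant.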


Update — Additional Steps Tried (Still Unresolved)

Following up with everything I’ve attempted since the original post:

1. Hardcoded topK: replaced the $fromAI() expression with a fixed value of 4. No change.

2. Model name variations: tried all three formats for the embedding model:

  • nomic-embed-text:v1.5

  • nomic-embed-text:latest

  • nomic-embed-text

All produce the same 400 on /api/embed.

3. Switched vector store (Pinecone → Qdrant): set up Qdrant locally via Docker (localhost:6333), created a collection, and re-ingested documents using Embeddings Ollama + Qdrant Vector Store. The same error persists when querying.

4. Added name parameter to toolVectorStore node: the node was missing "name": "tesla_q" — added it. No change to the error.

5. Switched Embeddings Ollama → Embeddings OpenAI node pointed at Ollama: used the OpenAI-compatible endpoint http://127.0.0.1:11434/v1 with the OpenAI embeddings node. Got a new error:

TypeError: Cannot read properties of undefined (reading 'replace')
at embedQuery (embeddings.ts:204)

The model name resolves correctly from the dropdown, but the embedding call still fails.

6. Downgraded toolVectorStore typeVersion from 1.1 to 1: matched the version used in working tutorials. No change.


Current state: Ollama is running, model is loaded, Qdrant has data — but every /api/embed call from n8n returns 400 instantly.

Any insight into what payload n8n is sending to /api/embed would help narrow this down. Is there a way to log the raw request body?


@Anis0315 I don’t think n8n can log that directly. Instead, build the JSON body you expect and execute the request against Ollama manually. For debugging self-hosted n8n in Docker, I also turn on N8N_LOG_LEVEL=debug.
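In the same spirit, here is a minimal sketch of a throwaway listener you can point the Ollama credential’s base URL at to capture the exact body n8n sends; the port 9999 and the handler are my own choices, not anything built into n8n:

```python
# Throwaway capture server: point the Ollama credential at
# http://127.0.0.1:9999, run the workflow once, read the printed body.
from http.server import BaseHTTPRequestHandler, HTTPServer

captured = []  # raw request bodies, newest last

class DumpHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length).decode("utf-8", errors="replace")
        captured.append(body)
        print(f"{self.path} <- {body}")  # the raw payload n8n sent
        self.send_response(400)          # we only capture, never embed
        self.end_headers()

def run(port=9999):
    HTTPServer(("127.0.0.1", port), DumpHandler).serve_forever()
```

Run it, trigger the workflow, and the console shows exactly what lands on /api/embed.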

:white_check_mark: Solution — one that actually works

After extensive troubleshooting, I found a working solution for using Ollama embeddings (nomic-embed-text) with Qdrant in n8n — without using the native Embeddings Ollama node.


The Fix

Use Embeddings OpenAI node instead of Embeddings Ollama, pointed at Ollama’s OpenAI-compatible endpoint.

Configuration:

  • Node: Embeddings OpenAI

  • Base URL (in credential): http://127.0.0.1:11434/v1

  • API Key: anything (e.g. ollama)

  • Model: nomic-embed-text (or nomic-embed-text:latest)

  • Options → Dimensions: 768

The Dimensions field is the critical part — without it the workflow fails.
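To make the fix concrete, here is a sketch of the request body that this configuration ends up sending to Ollama’s /v1/embeddings endpoint (field names follow the OpenAI embeddings API; the query text is invented):

```python
import json

def build_openai_embed_body(model, text, dimensions=None):
    """Body for POST /v1/embeddings (OpenAI-compatible endpoint)."""
    body = {"model": model, "input": text}
    if dimensions is not None:
        body["dimensions"] = dimensions  # the critical field per this fix
    return body

# With Dimensions set to 768, matching nomic-embed-text's output size:
print(json.dumps(build_openai_embed_body(
    "nomic-embed-text", "example query", dimensions=768)))
```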


Why the Native Ollama Node Fails

The Embeddings Ollama node sends requests to /api/embed — Ollama’s native endpoint. In n8n 2.9.x, this node sends a malformed or incomplete payload, causing Ollama to immediately reject it with 400 invalid input type. The response time is 0s, meaning Ollama rejects the request before even processing it.

This is a compatibility issue between n8n’s Embeddings Ollama node and the Ollama API — not an Ollama configuration problem.


Why the OpenAI Node Works

Ollama exposes an OpenAI-compatible endpoint at /v1/embeddings. The Embeddings OpenAI node sends a clean, well-structured request that this endpoint fully understands.


Why Dimensions: {{768}} Is Required

The Embeddings OpenAI node was built for OpenAI models like text-embedding-3-small, which support a dimensions parameter. Without explicitly setting it, the node sends dimensions: undefined in the request body — Ollama receives that and returns 400 invalid input type because it doesn’t know what size vector to output.

By setting {{768}} (the native output size of nomic-embed-text), you send a complete, valid request that Ollama accepts and your Qdrant collection matches.


Summary

| Approach | Result | Reason |
| --- | --- | --- |
| Embeddings Ollama node | :cross_mark: 400 error | Malformed payload to /api/embed |
| Embeddings OpenAI node (no dimensions) | :cross_mark: 400 error | Sends undefined dimensions |
| Embeddings OpenAI node + {{768}} | :white_check_mark: Works | Clean, complete request Ollama accepts |

Full Working Stack

  • LLM: OpenRouter (or any provider)

  • Embeddings: Embeddings OpenAI node → Ollama (/v1) → nomic-embed-text

  • Vector Store: Qdrant (Docker, localhost:6333)

  • Tool: toolVectorStore with hardcoded topK: 4
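The per-query flow of this stack boils down to two requests; a minimal sketch of both bodies, where the collection name tesla_docs is an assumption (use whatever your Qdrant collection is actually called):

```python
import json

EMBED_URL = "http://127.0.0.1:11434/v1/embeddings"
SEARCH_URL = "http://127.0.0.1:6333/collections/tesla_docs/points/search"

def embed_body(query):
    # Embeddings OpenAI node -> Ollama's OpenAI-compatible endpoint
    return {"model": "nomic-embed-text", "input": query, "dimensions": 768}

def search_body(vector, top_k=4):
    # Qdrant similarity search over the 768-dim collection
    return {"vector": vector, "limit": top_k, "with_payload": True}

print(json.dumps(embed_body("example question")))
print(json.dumps(search_body([0.0] * 768))[:48] + "...")
```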

Hope this saves someone hours of debugging!
