Problem using Ollama Local AI Agent in n8n 1.114.0 after removing API Key

Hi!

I’m trying to set up a local Ollama-based AI agent in n8n (version 1.114.0) without using any external API keys.

Environment:

- n8n version: 1.114.0
- Ollama running locally via docker-compose
- No external AI services (OpenAI, Anthropic, etc.)
- The HTTP Request node with a Bearer token works fine
- Data loads into NocoDB successfully
- The issue appears only with the Ollama Chat Model and LLM Chain nodes
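For context, when both services run under docker-compose, the n8n container cannot reach Ollama via `localhost` (that resolves to the n8n container itself); the base URL has to use the compose service name. A minimal sketch, assuming service names `ollama` and `n8n` (adapt to your actual compose file):

```yaml
# docker-compose.yml (sketch -- service names are assumptions)
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"
  n8n:
    image: n8nio/n8n
    ports:
      - "5678:5678"
    # In the n8n Ollama credential, set Base URL to http://ollama:11434
    # (not http://localhost:11434, which points back at the n8n container).
```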

Problem:

When I remove the “API Key” field from the LLM nodes (because Ollama is local and doesn’t require one), n8n throws a connection error:

- “The resource you are requesting could not be found” (404)
- Sometimes a GET request returns “Ollama is running”, while the POST is rejected
- As a result, the Ollama Chat Model node cannot connect to any upstream chain, and the LLM Chain node breaks completely
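For reference, Ollama's chat endpoint expects a POST with a JSON body and no API key; a GET against the same path is rejected, which matches the symptoms above. A minimal sketch of the request the node should be sending (the base URL and model name are assumptions for illustration; the `urlopen` call is left commented out so the snippet runs without a live Ollama instance):

```python
import json
import urllib.request

# Assumed local Ollama base URL; adjust if Ollama runs elsewhere.
OLLAMA_BASE_URL = "http://localhost:11434"

# Ollama's /api/chat endpoint takes a POST with a JSON body -- no API key.
payload = {
    "model": "llama3",  # assumed model name for illustration
    "messages": [{"role": "user", "content": "Hello"}],
    "stream": False,
}

req = urllib.request.Request(
    url=f"{OLLAMA_BASE_URL}/api/chat",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",  # a plain GET against this path is rejected
)

# urllib.request.urlopen(req) would perform the actual call.
print(req.get_method(), req.full_url)
# → POST http://localhost:11434/api/chat
```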

It looks like:

- the LLM node in 1.114.0 forces authentication logic even for local Ollama,
- or it uses an incorrect endpoint (GET instead of POST),
- or something in the internal LLM proxy is incompatible with Ollama running inside a Docker container.

Question:

What is the correct way to configure the Ollama Chat Model (or LLM Chain) in n8n 1.114.0 so it works with a local Ollama instance without requiring an API key?

Do I need:

- a specific base URL format?
- a custom header?
- a separate LLM agent configuration?
- a workaround for the 404 / GET–POST mismatch?
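As a stop-gap, since the HTTP Request node already works in this workflow, it can call Ollama directly until the LLM node issue is resolved: method POST, URL `http://<ollama-host>:11434/api/generate`, body as below (the model name is an assumption; replace `<ollama-host>` with whatever hostname reaches your Ollama container):

```json
{
  "model": "llama3",
  "prompt": "Summarize the loaded NocoDB rows",
  "stream": false
}
```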

Any official guidance or fixes for the Ollama ↔ n8n integration would be greatly appreciated.