Issue with LLMChain and Agent nodes: too slow, with errors

I have a simple workflow with an LLMChain node, starting with a Chat Trigger and a Groq API model.
It had been working well, but suddenly these things happened:

  1. There is no animation (the circling arrow that appears while the model executes no longer shows)
  2. The flow is slowed down; it responds, but very slowly
  3. The flow works, but slowly, with no animation in the model box and no logs at all
  4. I use Docker Desktop, and I reinstalled Docker and the n8n image (I tested the latest and many previous ones), and it still does not work
  5. When I look at the logs in the Docker container, I see this error:
    2025-02-09 20:40:34 Error in handler N8nLlmTracing, handleLLMEnd: TypeError: fetch failed
    2025-02-09 20:40:55 Error in handler N8nLlmTracing, handleLLMStart: TypeError: fetch failed
    2025-02-09 20:41:06 Error in handler N8nLlmTracing, handleLLMEnd: TypeError: fetch failed
    2025-02-09 20:50:32 Error in handler N8nLlmTracing, handleLLMStart: TypeError: fetch failed
    2025-02-09 20:50:43 Error in handler N8nLlmTracing, handleLLMEnd: TypeError: fetch failed

Please help me! I have reinstalled everything and reviewed the proxy and the network, and I cannot find anything wrong.
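
Since "TypeError: fetch failed" is Node.js's generic error for a network request that never completed, one way to narrow this down is to run the same kind of fetch from inside the n8n container and look at the underlying cause. A minimal sketch, assuming your container is named n8n and you are targeting Groq's OpenAI-compatible endpoint (a 401 without an API key still proves connectivity):

# Check DNS resolution from inside the container
docker exec -it n8n nslookup api.groq.com

# Reproduce the failing fetch with Node's own fetch and print the underlying cause
docker exec -it n8n node -e "fetch('https://api.groq.com/openai/v1/models').then(r => console.log('HTTP', r.status)).catch(e => console.error(e.cause ?? e))"

If the second command also fails, the problem is in the container's networking (DNS, proxy, or IPv6) rather than in n8n itself.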


It looks like your topic is missing some important information. Could you provide the following, if applicable?

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

n8n version: latest as of today, 9 February. I also tested version 1.74.2 and it does not work either.
Database: SQLite
Running n8n via: Docker (Docker version 27.4.0, build bde2b89)
Operating system: Windows 11

Same here. Simple requests to Ollama take 30+ seconds, while manual requests via curl to the same Ollama instance complete almost instantly (Ollama is running on an RTX 3090). The logs show the same thing:

Feb 15 13:55:40 n8n n8n[648]: Error in handler N8nLlmTracing, handleLLMStart: TypeError: fetch failed
Feb 15 13:55:56 n8n n8n[648]: Error in handler N8nLlmTracing, handleLLMEnd: TypeError: fetch failed
Feb 15 13:56:13 n8n n8n[648]: Error in handler N8nLlmTracing, handleLLMStart: TypeError: fetch failed
Feb 15 13:56:24 n8n n8n[648]: Error in handler N8nLlmTracing, handleLLMEnd: TypeError: fetch failed

n8n version: 1.78.1
Database (default: SQLite): SQLite
n8n EXECUTIONS_PROCESS setting (default: own, main): default
Running n8n via (Docker, npm, n8n cloud, desktop app): npm inside Proxmox LXC
Operating system: Debian 12

The configuration is minimal.

N8N_SECURE_COOKIE=false
N8N_PROTOCOL=https
N8N_HOST=n8n.mydomain.local
WEBHOOK_URL=https://n8n.mydomain.public/
WEBHOOK_TUNNEL_URL=https://n8n.mydomain.public

NODE_ENV=production

N8N_METRICS=true
N8N_METRICS_PREFIX=

This isn't a resource issue: there's over 64 GB of available memory, and the CPU isn't under any load.

Update:
I set up a simple workflow using “Chat Trigger” and “HTTP Request,” which basically makes a request like this:

curl -s http://my-ollama-ip:11434/api/generate -d '{"model": "mistral-nemo:12b","prompt": "here-is-my-request", "stream": false}'

And the model responds almost instantly.
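
The "fetch failed" entries above come from Node.js's built-in fetch, so reproducing the request with Node's fetch from the same machine can show whether the slowdown lives in Node's network stack (for example DNS or IPv6 resolution) rather than in Ollama. A rough sketch, reusing the placeholder host and model from the curl above:

node -e "const t = Date.now(); fetch('http://my-ollama-ip:11434/api/generate', { method: 'POST', body: JSON.stringify({ model: 'mistral-nemo:12b', prompt: 'hello', stream: false }) }).then(r => r.json()).then(() => console.log('done in', Date.now() - t, 'ms')).catch(e => console.error(e.cause ?? e))"

If this is as slow as the n8n run while curl stays instant, the gap is in Node's networking, not in Ollama.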

Me too… I ran the same flow on n8n Cloud and on self-hosted n8n (Docker). The cloud one is very fast, but the self-hosted n8n (Docker) is too slow and outputs this error log. Why? Maybe some config is different? :joy:
Error in handler N8nLlmTracing, handleLLMStart: TypeError: fetch failed
Error in handler N8nLlmTracing, handleLLMEnd: TypeError: fetch failed

And does it no longer support streaming output?

Hey guys, I'm experiencing the same problem. Any solutions?

I am having the same problem. Any solutions?


Why is the team not replying?


Got the same with this environment:

  • n8n version: 1.84.3
  • Database (default: SQLite): default
  • n8n EXECUTIONS_PROCESS setting (default: own, main): default
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker
  • Operating system: macOS Sequoia 15.3.2 (24D81)

I run n8n in Docker and Ollama as a standalone app. They are connected: n8n can reach Ollama and Ollama can reach n8n (the connection is tested and OK). The LLM is answering, but the AI Agent is extremely slow.
I got the same error:
Error in handler N8nLlmTracing, handleLLMEnd: TypeError: fetch failed

I saw this post:

but I still haven't figured out how to edit the packages and then run n8n in Docker; I lack the knowledge, but I'm still searching.
If there is any step-by-step help, or another way to solve this, that would be nice.
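
For anyone double-checking the same Docker-plus-native-Ollama setup: inside the container, localhost points at the container itself, so the Ollama credential's base URL usually has to target the Docker host. A minimal sketch, assuming the default Ollama port and an n8n container named n8n:

# Base URL to use in the n8n Ollama credentials when Ollama runs on the host
http://host.docker.internal:11434

# Quick check from inside the n8n container that the host's Ollama is reachable
docker exec -it n8n wget -qO- http://host.docker.internal:11434/api/tags

The /api/tags endpoint lists the models Ollama has pulled, so a fast JSON response here rules out basic connectivity as the bottleneck.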

Same here:

Error in handler N8nLlmTracing, handleLLMStart: TypeError: fetch failed

I am using n8n in queue mode with two runners, but they are very, very slow…

Actually, I tried the setup mentioned in GitHub - n8n-io/self-hosted-ai-starter-kit: The Self-hosted AI Starter Kit is an open-source template that quickly sets up a local AI environment. Curated by n8n, it provides essential tools for creating secure, self-hosted AI workflows. I ran it on a Windows machine, but I always get the same error and the node never moves on to the Ollama model at all.

Actually, the main reason was that Ollama had two models in n8n.

Now, with all the models, it's working fine. I am not sure what the issue was earlier :frowning:
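
One thing worth verifying in situations like this is that the model tag the n8n node references has actually been pulled into Ollama; a mismatch between the configured and pulled models could explain the earlier behavior. A quick check, assuming the compose file's container name of ollama and a hypothetical model tag:

# List the models this Ollama instance actually has
docker exec -it ollama ollama list

# Pull the exact tag the n8n model node references (example tag only)
docker exec -it ollama ollama pull mistral-nemo:12b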

The docker-compose .yml file:
volumes:
  n8n_storage:
  postgres_storage:
  ollama_storage:
  qdrant_storage:

networks:
  demo:

x-n8n: &service-n8n
  image: n8nio/n8n:latest
  networks: ['demo']
  environment:
    - DB_TYPE=postgresdb
    - DB_POSTGRESDB_HOST=postgres
    - DB_POSTGRESDB_USER=${POSTGRES_USER}
    - DB_POSTGRESDB_PASSWORD=${POSTGRES_PASSWORD}
    - N8N_DIAGNOSTICS_ENABLED=false
    - N8N_PERSONALIZATION_ENABLED=false
    - N8N_ENCRYPTION_KEY
    - N8N_USER_MANAGEMENT_JWT_SECRET
    - OLLAMA_HOST=host.docker.internal:11434
  env_file:
    - .env

x-ollama: &service-ollama
  image: ollama/ollama:latest
  container_name: ollama
  networks: ['demo']
  restart: unless-stopped
  ports:
    - 11434:11434
  volumes:
    - ollama_storage:/root/.ollama

x-init-ollama: &init-ollama
  image: ollama/ollama:latest
  networks: ['demo']
  container_name: ollama-pull-llama
  volumes:
    - ollama_storage:/root/.ollama
  entrypoint: /bin/sh
  environment:
    - OLLAMA_HOST=host.docker.internal:11434
  command:
    - "-c"
    - "sleep 3"

services:
  postgres:
    image: postgres:16-alpine
    hostname: postgres
    networks: ['demo']
    restart: unless-stopped
    environment:
      - POSTGRES_USER
      - POSTGRES_PASSWORD
      - POSTGRES_DB
    volumes:
      - postgres_storage:/var/lib/postgresql/data
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -h localhost -U ${POSTGRES_USER} -d ${POSTGRES_DB}']
      interval: 5s
      timeout: 5s
      retries: 10

  n8n-import:
    <<: *service-n8n
    hostname: n8n-import
    container_name: n8n-import
    entrypoint: /bin/sh
    command:
      - "-c"
      - "n8n import:credentials --separate --input=/backup/credentials && n8n import:workflow --separate --input=/backup/workflows"
    volumes:
      - ./n8n/backup:/backup
    depends_on:
      postgres:
        condition: service_healthy

  n8n:
    <<: *service-n8n
    hostname: n8n
    container_name: n8n
    restart: unless-stopped
    ports:
      - 5678:5678
    volumes:
      - n8n_storage:/home/node/.n8n
      - ./n8n/backup:/backup
      - ./shared:/data/shared
    depends_on:
      postgres:
        condition: service_healthy
      n8n-import:
        condition: service_completed_successfully

  qdrant:
    image: qdrant/qdrant
    hostname: qdrant
    container_name: qdrant
    networks: ['demo']
    restart: unless-stopped
    ports:
      - 6333:6333
    volumes:
      - qdrant_storage:/qdrant/storage

  ollama-cpu:
    profiles: ["cpu"]
    <<: *service-ollama

  ollama-gpu:
    profiles: ["gpu-nvidia"]
    <<: *service-ollama
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]

  ollama-gpu-amd:
    profiles: ["gpu-amd"]
    <<: *service-ollama
    image: ollama/ollama:rocm
    devices:
      - "/dev/kfd"
      - "/dev/dri"

  ollama-pull-llama-cpu:
    profiles: ["cpu"]
    <<: *init-ollama
    depends_on:
      - ollama-cpu

  ollama-pull-llama-gpu:
    profiles: ["gpu-nvidia"]
    <<: *init-ollama
    depends_on:
      - ollama-gpu

  ollama-pull-llama-gpu-amd:
    profiles: [gpu-amd]
    <<: *init-ollama
    image: ollama/ollama:rocm
    depends_on:
      - ollama-gpu-amd
I also run the ollama serve command locally so that Ollama runs on the host, and I start the stack with docker compose --profile cpu up.
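
For what it's worth, the OLLAMA_HOST=host.docker.internal:11434 entry in the x-n8n anchor means the containers expect Ollama on the Docker host, which matches running ollama serve locally. A small sanity check, assuming default ports:

# From the host: confirm Ollama is up and see which models are pulled
curl -s http://localhost:11434/api/tags

# From inside the n8n container: confirm it can reach the host's Ollama
docker exec -it n8n wget -qO- http://host.docker.internal:11434/api/tags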

Hello, we have the same issue, with the same setup as above using the n8n self-hosted-ai-starter-kit: "Error in handler N8nLlmTracing, handleLLMStart: TypeError: fetch failed". Please help, thank you.