UND_ERR_HEADERS_TIMEOUT on Basic LLM Chain and Agent

Describe the problem/error/question

Doing some tests with Ollama (latest Docker image) and n8n:
I get random errors on the LLM Chain node. When the error happens, it does so (as far as I can see) at exactly 5 minutes. The rest of the time it runs fine.

What is the error message (if any)?

fetch failed
Error details

Other info
n8n version

1.42.1 (Self Hosted)

Error cause

{ "name": "HeadersTimeoutError", "code": "UND_ERR_HEADERS_TIMEOUT", "message": "Headers Timeout Error" }
Stack trace

TypeError: fetch failed
    at node:internal/deps/undici/undici:12618:11
    at createOllamaStream (/usr/local/lib/node_modules/n8n/node_modules/@langchain/community/dist/utils/ollama.cjs:12:22)
    at createOllamaChatStream (/usr/local/lib/node_modules/n8n/node_modules/@langchain/community/dist/utils/ollama.cjs:61:5)
    at ChatOllama._streamResponseChunks (/usr/local/lib/node_modules/n8n/node_modules/@langchain/community/dist/chat_models/ollama.cjs:399:30)
    at ChatOllama._call (/usr/local/lib/node_modules/n8n/node_modules/@langchain/community/dist/chat_models/ollama.cjs:507:26)
    at ChatOllama._generate (/usr/local/lib/node_modules/n8n/node_modules/@langchain/core/dist/language_models/chat_models.cjs:368:22)
    at async Promise.allSettled (index 0)
    at ChatOllama._generateUncached (/usr/local/lib/node_modules/n8n/node_modules/@langchain/core/dist/language_models/chat_models.cjs:118:25)
    at LLMChain._call (/usr/local/lib/node_modules/n8n/node_modules/langchain/dist/chains/llm_chain.cjs:157:37)
    at LLMChain.invoke (/usr/local/lib/node_modules/n8n/node_modules/langchain/dist/chains/base.cjs:58:28)

Information on your n8n setup

  • n8n version: 1.42.1
  • Database (default: SQLite): default
  • n8n EXECUTIONS_PROCESS setting (default: own, main): default
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker
  • Operating system: debian12

Hey @miko,

It looks like Ollama is too slow to respond. If you are running it locally, it could be that the hardware it is running on needs an upgrade to handle it.

One thing you could do to work around this, though: in the Ollama node, if you go to Options you should be able to add a Keep Alive setting, which has a default of 5 minutes. Changing this to 10m might solve the issue for now.
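For context, here is a minimal sketch of what that option maps to at the API level, assuming the node forwards it as the keep_alive field on Ollama's /api/chat endpoint (the host and model below are placeholders):

// Sketch: calling Ollama's /api/chat directly with keep_alive (Node 18+, ES module).
// keep_alive controls how long the model stays loaded in memory after the
// request finishes; it is not an HTTP timeout on the client side.
const res = await fetch('http://localhost:11434/api/chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    model: 'llama3.1:latest',                        // placeholder model name
    messages: [{ role: 'user', content: 'Hello' }],
    stream: false,
    keep_alive: '10m',  // '5m' is the default; -1 keeps the model loaded indefinitely
  }),
});
const data = await res.json();
console.log(data.message.content);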

I set it to 1h in n8n and the random error remains the same.

Digging a little, I found this

I also set OLLAMA_KEEP_ALIVE to 1h in the Ollama environment.
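One way to confirm the server actually picked up OLLAMA_KEEP_ALIVE is to query Ollama's /api/ps endpoint, which returns the same data that ollama ps prints. A small sketch, assuming the default host:

// Sketch: list loaded models and when they expire (Node 18+, ES module).
const res = await fetch('http://localhost:11434/api/ps');
const { models } = await res.json();
for (const m of models ?? []) {
  console.log(`${m.name} loaded until ${m.expires_at}`);
}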

I don't know what could be happening.


@miko did you ever figure this one out? I seem to be having a similar error on my system.

The Keep Alive setting seems to be working correctly as when I run ollama ps I can see that it’s alive for longer than the default 5 minutes:

NAME           	ID          	SIZE  	PROCESSOR	UNTIL               
llama3.1:latest	62757c860e01	6.2 GB	100% CPU 	40 minutes from now	

And yet after 5 minutes of the LLM workflow running in n8n, I get this error:

{
  "errorMessage": "Internal error",
  "errorDetails": {},
  "n8nDetails": {
    "n8nVersion": "1.51.1 (Self Hosted)",
    "binaryDataMode": "default",
    "stackTrace": [
      "TypeError: fetch failed",
      "    at node:internal/deps/undici/undici:12502:13",
      "    at createOllamaStream (/usr/local/lib/node_modules/n8n/node_modules/@langchain/community/dist/utils/ollama.cjs:12:22)",
      "    at createOllamaChatStream (/usr/local/lib/node_modules/n8n/node_modules/@langchain/community/dist/utils/ollama.cjs:61:5)",
      "    at ChatOllama._streamResponseChunks (/usr/local/lib/node_modules/n8n/node_modules/@langchain/community/dist/chat_models/ollama.cjs:410:30)",
      "    at ChatOllama._call (/usr/local/lib/node_modules/n8n/node_modules/@langchain/community/dist/chat_models/ollama.cjs:518:26)",
      "    at ChatOllama._generate (/usr/local/lib/node_modules/n8n/node_modules/@langchain/community/node_modules/@langchain/core/dist/language_models/chat_models.cjs:507:22)",
      "    at async Promise.allSettled (index 0)",
      "    at ChatOllama._generateUncached (/usr/local/lib/node_modules/n8n/node_modules/@langchain/community/node_modules/@langchain/core/dist/language_models/chat_models.cjs:177:29)",
      "    at LLMChain._call (/usr/local/lib/node_modules/n8n/node_modules/@n8n/n8n-nodes-langchain/node_modules/langchain/dist/chains/llm_chain.cjs:162:37)",
      "    at LLMChain.invoke (/usr/local/lib/node_modules/n8n/node_modules/@n8n/n8n-nodes-langchain/node_modules/langchain/dist/chains/base.cjs:58:28)"
    ]
  }
}

It works just fine with smaller prompts that take less than 5 minutes, but anything longer and it fails with the same error, even though the llama instance is still alive.
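That 5-minute boundary matches a client-side default rather than anything in Ollama: Node's built-in fetch is backed by undici, whose headersTimeout (and bodyTimeout) default to 300 000 ms, i.e. exactly 5 minutes, and exceeding it surfaces as UND_ERR_HEADERS_TIMEOUT. Below is a minimal sketch of raising those limits in a standalone Node script, assuming fetch goes through undici's global dispatcher (true for recent Node releases); n8n doesn't expose these options directly, so this is illustrative rather than a drop-in fix:

// Sketch: raise undici's timeouts for Node's global fetch (Node 18+, ES module).
// Both headersTimeout and bodyTimeout default to 300_000 ms (5 minutes),
// which is exactly where these workflows fail.
import { Agent, setGlobalDispatcher } from 'undici';

setGlobalDispatcher(new Agent({
  headersTimeout: 3_600_000, // wait up to 1 h for response headers
  bodyTimeout: 3_600_000,    // allow up to 1 h between body chunks
}));

// Any fetch() made after this point uses the new dispatcher.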

No, it seems it is a problem that was never resolved… I had issues with other AI nodes as well. BTW: all the examples are with OpenAI; it seems to me that the focus is there now…

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.