Summarize chain timeout

Describe the problem/error/question

Hi all, I am testing the new Summarize chain with a local Ollama + phi3 setup. The node errors out with a timeout (the server running the model is quite slow), and there is no way to set a timeout value, which renders this node useless in my use case. Is there a way to work around this?

What is the error message (if any)?

{
  "errorMessage": "fetch failed",
  "errorDetails": {},
  "n8nDetails": {
    "n8nVersion": "1.39.1 (Self Hosted)",
    "binaryDataMode": "default",
    "cause": {
      "name": "HeadersTimeoutError",
      "code": "UND_ERR_HEADERS_TIMEOUT",
      "message": "Headers Timeout Error"
    },
    "stackTrace": [
      "TypeError: fetch failed",
      "    at Object.fetch (node:internal/deps/undici/undici:11118:11)",
      "    at createOllamaStream (C:\\Users\\matias\\AppData\\Roaming\\npm\\node_modules\\n8n\\node_modules\\@langchain\\community\\dist\\utils\\ollama.cjs:12:22)",
      "    at createOllamaGenerateStream (C:\\Users\\matias\\AppData\\Roaming\\npm\\node_modules\\n8n\\node_modules\\@langchain\\community\\dist\\utils\\ollama.cjs:57:5)",
      "    at Proxy._streamResponseChunks (C:\\Users\\matias\\AppData\\Roaming\\npm\\node_modules\\n8n\\node_modules\\@langchain\\community\\dist\\llms\\ollama.cjs:346:26)",
      "    at Proxy._call (C:\\Users\\matias\\AppData\\Roaming\\npm\\node_modules\\n8n\\node_modules\\@langchain\\community\\dist\\llms\\ollama.cjs:376:26)",
      "    at async Promise.all (index 0)",
      "    at Proxy._generate (C:\\Users\\matias\\AppData\\Roaming\\npm\\node_modules\\n8n\\node_modules\\@langchain\\community\\node_modules\\@langchain\\core\\dist\\language_models\\llms.cjs:323:29)",
      "    at Proxy._generateUncached (C:\\Users\\matias\\AppData\\Roaming\\npm\\node_modules\\n8n\\node_modules\\@langchain\\community\\node_modules\\@langchain\\core\\dist\\language_models\\llms.cjs:138:22)",
      "    at LLMChain._call (C:\\Users\\matias\\AppData\\Roaming\\npm\\node_modules\\n8n\\node_modules\\langchain\\dist\\chains\\llm_chain.cjs:157:37)",
      "    at LLMChain.invoke (C:\\Users\\matias\\AppData\\Roaming\\npm\\node_modules\\n8n\\node_modules\\langchain\\dist\\chains\\base.cjs:58:28)",
      "    at StuffDocumentsChain._call (C:\\Users\\matias\\AppData\\Roaming\\npm\\node_modules\\n8n\\node_modules\\langchain\\dist\\chains\\combine_docs_chain.cjs:62:24)",
      "    at StuffDocumentsChain.invoke (C:\\Users\\matias\\AppData\\Roaming\\npm\\node_modules\\n8n\\node_modules\\langchain\\dist\\chains\\base.cjs:58:28)",
      "    at MapReduceDocumentsChain._call (C:\\Users\\matias\\AppData\\Roaming\\npm\\node_modules\\n8n\\node_modules\\langchain\\dist\\chains\\combine_docs_chain.cjs:210:24)",
      "    at MapReduceDocumentsChain.invoke (C:\\Users\\matias\\AppData\\Roaming\\npm\\node_modules\\n8n\\node_modules\\langchain\\dist\\chains\\base.cjs:58:28)",
      "    at Object.execute (C:\\Users\\matias\\AppData\\Roaming\\npm\\node_modules\\n8n\\node_modules\\@n8n\\n8n-nodes-langchain\\nodes\\chains\\ChainSummarization\\V2\\ChainSummarizationV2.node.ts:369:23)",
      "    at Workflow.runNode (C:\\Users\\matias\\AppData\\Roaming\\npm\\node_modules\\n8n\\node_modules\\n8n-workflow\\src\\Workflow.ts:1378:8)",
      "    at C:\\Users\\matias\\AppData\\Roaming\\npm\\node_modules\\n8n\\node_modules\\n8n-core\\src\\WorkflowExecute.ts:1050:29",
      "    at C:\\Users\\matias\\AppData\\Roaming\\npm\\node_modules\\n8n\\node_modules\\n8n-core\\src\\WorkflowExecute.ts:1726:11"
    ]
  }
}
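
Note: the UND_ERR_HEADERS_TIMEOUT above comes from undici, the HTTP client behind Node's built-in fetch. It aborts when the server has not sent response headers within headersTimeout (300 000 ms by default), which matches a failure after roughly five minutes. The Summarize Chain node does not expose that setting, but the timeout can be raised when calling Ollama directly, e.g. from a standalone script or a Code node with external modules allowed. A minimal sketch, assuming Ollama on its default localhost:11434 port and the phi3 model (neither is confirmed in the thread):

```ts
// Sketch: calling Ollama directly with a longer headers timeout.
// Assumptions (not from the thread): Ollama at localhost:11434, model "phi3",
// and the standalone `undici` npm package (Node's bundled fetch does not
// expose this knob).
import { Agent, fetch } from "undici";

// undici aborts with UND_ERR_HEADERS_TIMEOUT when the server sends no
// response headers within `headersTimeout` (default 300 000 ms).
const slowAgent = new Agent({
  headersTimeout: 30 * 60 * 1000, // allow up to 30 minutes before headers arrive
  bodyTimeout: 0,                 // no limit on the response body
});

const res = await fetch("http://localhost:11434/api/generate", {
  dispatcher: slowAgent,
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "phi3",
    prompt: "Write a concise summary of the following text:\n\n...",
    // With stream: false Ollama replies only once generation finishes,
    // so the raised headersTimeout is what keeps this call alive.
    stream: false,
  }),
});

const { response } = (await res.json()) as { response: string };
console.log(response);
```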

Information on your n8n setup

  • n8n version: 1.39.1
  • Database (default: SQLite): default
  • n8n EXECUTIONS_PROCESS setting (default: own, main): default
  • Running n8n via (Docker, npm, n8n cloud, desktop app): npm
  • Operating system: Windows 10

Hi @Matias_Hegoburu,

Welcome to the community! :dizzy:

Based on your description, the best workaround would rely either on how you’re making the request to the server or on the server’s capabilities.

Could you share a bit more about how you are making your request?

Hi @mariana-na! Thanks for answering! I simply dragged the “Summarize Chain” node into the workflow, hooked it up to an Ollama subnode, and added a hardcoded string as the input… no custom code whatsoever… and it times out with the error above after some minutes (I didn’t count, let’s say 15). The interesting thing is that the Basic LLM chain, with the same setup, works just fine, but it requires some extra work on my part to achieve the same result, so for now that’s what I am doing. It is a shame that the Summarize chain does not work.
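
For anyone landing here later: the “extra work” can be sketched as a manual map-reduce, which is what the Summarize chain does internally (note the MapReduceDocumentsChain frame in the stack trace above). The endpoint, model name, chunk size, and prompts below are illustrative assumptions, not details from the original workflow:

```ts
// Hypothetical sketch of a manual map-reduce summary, mirroring what the
// Summarize chain does internally. Endpoint, model, chunk size, and prompts
// are assumptions for illustration only.
import { Agent, fetch } from "undici";

const agent = new Agent({ headersTimeout: 0, bodyTimeout: 0 }); // disable client-side timeouts

async function generate(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    dispatcher: agent,
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "phi3", prompt, stream: false }),
  });
  const { response } = (await res.json()) as { response: string };
  return response;
}

async function summarize(text: string, chunkSize = 4000): Promise<string> {
  // Map: summarize each chunk on its own, so no single call runs too long.
  const partials: string[] = [];
  for (let i = 0; i < text.length; i += chunkSize) {
    const chunk = text.slice(i, i + chunkSize);
    partials.push(await generate(`Write a concise summary of:\n\n${chunk}`));
  }
  // Reduce: merge the partial summaries into a final one.
  if (partials.length === 1) return partials[0];
  return generate(
    `Combine the following summaries into a single concise summary:\n\n${partials.join("\n\n")}`,
  );
}
```

Summarizing fixed-size chunks keeps each individual Ollama call short, so no single request sits waiting long enough to trip a client-side headers timeout.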
