The workflow returns "fetch failed" when using an Ollama local model

The workflow returns an error when using a local Ollama model, while the online DeepSeek model works fine.

Error message: fetch failed

Please share your workflow

Share the output returned by the last node

Error running node ‘AI Agent’
Stack trace

TypeError: fetch failed
    at node:internal/deps/undici/undici:13510:13
    at post (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/[email protected]/node_modules/ollama/dist/shared/ollama.9c897541.cjs:114:20)
    at Ollama.processStreamableRequest (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/[email protected]/node_modules/ollama/dist/shared/ollama.9c897541.cjs:253:22)
    at ChatOllama._streamResponseChunks (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/@[email protected]_@[email protected][email protected][email protected][email protected]__/node_modules/@langchain/ollama/dist/chat_models.cjs:735:32)
    at ChatOllama._streamIterator (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/@[email protected][email protected][email protected][email protected]_/node_modules/@langchain/core/dist/language_models/chat_models.cjs:100:34)
    at ChatOllama.transform (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/@[email protected][email protected][email protected][email protected]_/node_modules/@langchain/core/dist/runnables/base.cjs:402:9)
    at RunnableBinding.transform (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/@[email protected][email protected][email protected][email protected]_/node_modules/@langchain/core/dist/runnables/base.cjs:912:9)
    at ToolCallingAgentOutputParser.transform (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/@[email protected][email protected][email protected][email protected]_/node_modules/@langchain/core/dist/runnables/base.cjs:391:26)
    at RunnableSequence._streamIterator (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/@[email protected][email protected][email protected][email protected]_/node_modules/@langchain/core/dist/runnables/base.cjs:1349:30)
    at RunnableSequence.transform (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/@[email protected][email protected][email protected][email protected]_/node_modules/@langchain/core/dist/runnables/base.cjs:402:9)
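A TypeError: fetch failed raised at the undici layer usually means the HTTP request never reached the Ollama server at all, i.e. the Base URL configured in the n8n Ollama credential is not reachable from the n8n process. To rule out basic reachability, the exact URL can be tested directly from the machine running n8n (a sketch; 192.168.1.50 is a placeholder for the host's LAN IP and 11434 is the default Ollama port, substitute whatever the credential actually uses):

# should return a JSON list of installed models if Ollama is reachable
curl http://192.168.1.50:11434/api/tags

If this hangs or fails, Ollama is only listening on 127.0.0.1 or a firewall is blocking the port.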

Information on your n8n setup

  • n8n version: 1.97.1
  • Database (default: SQLite): postgres:13-alpine
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker
  • Operating system: Ubuntu 22.04
  • Web UI accessed from: Windows 10 / Edge on the LAN
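Note that since n8n runs in Docker, 127.0.0.1 inside the container is the container itself, not the Ubuntu host where Ollama runs, so the credential's Base URL has to point at an address the container can reach. A minimal sketch of one way to do that (container name, volume and image follow the defaults from the n8n docs and may differ from this setup):

# map host.docker.internal to the Docker host when starting n8n
docker run -d --name n8n -p 5678:5678 \
  --add-host=host.docker.internal:host-gateway \
  -v n8n_data:/home/node/.n8n \
  docker.n8n.io/n8nio/n8n

# then set the Ollama credential's Base URL in n8n to
#   http://host.docker.internal:11434
# or, alternatively, to the host's LAN IP, e.g. http://192.168.1.50:11434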

Thanks for your reply.
Connection to the local Ollama tested successfully, and the IP in my current setting is the host's LAN IP. I'll try exposing Ollama on 0.0.0.0.
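For reference, on Ubuntu Ollama usually runs as a systemd service, so binding it to all interfaces is done with an environment override (a sketch, assuming the standard ollama.service unit):

sudo systemctl edit ollama.service
# add the following lines in the override file:
#   [Service]
#   Environment="OLLAMA_HOST=0.0.0.0"
sudo systemctl daemon-reload
sudo systemctl restart ollama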


Same error after exposing Ollama on 0.0.0.0:
(base) root@chaowei407-W580G20:~# ss -tuln | grep 11434
tcp LISTEN 0 4096 *:11434 *:*
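Since ss now shows Ollama listening on all interfaces, the next thing worth checking is whether the n8n container itself can reach it (a sketch; the container name n8n and the IP 192.168.1.50 are placeholders, and busybox wget should be available in the Alpine-based n8n image):

# run from the Ubuntu host
docker exec -it n8n wget -qO- http://192.168.1.50:11434/api/tags

If this prints the model list, "fetch failed" is not a plain network problem.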

Connection is fine! Maybe the question is just too complicated for a small model, since a simple question does get a response. Do the prompts need to be improved?
Response to a simple question:


Returned:
Okay, the user wants me to act as a text extraction assistant. Let me make sure I understand the task correctly. The user provided a query where they want me to extract specific information from a given text. If the information isn’t present, I should return an empty string for that value.

First, I need to check if the user has provided the text content that needs analysis. Looking back at the user’s message, they wrote “answer user’s question” but didn’t include the actual text to analyze. Without the text, I can’t proceed to extract any information.

Wait, maybe the user expects me to use a previous interaction’s text? But in the current conversation history, there’s no prior text provided. The user’s latest message is just the instruction.

So, in this case, since there’s no text content to analyze, I should inform them that the text is missing. However, the user’s initial instruction says if information isn’t found, return an empty string. But the problem here is that there’s no text at all.

Hmm, maybe the user made a mistake and forgot to include the text. The correct approach is to point out that the text content is missing. But according to the initial instructions, I should extract information from the text content provided. Since there’s no text, I can’t extract anything.

Therefore, the proper response is to indicate that the text is missing. But the user’s instructions say to return empty strings if information isn’t found. However, in this case, the absence of text is a more fundamental issue.

Wait, maybe the user intended to provide the text in their question but forgot. In that case, I should ask for the text. But according to the initial setup, I’m supposed to analyze the text content they provide. Since there’s no text, I can’t extract any information.

So, the answer should be that the text is missing. But the user’s instructions might expect me to return empty strings. However, without the text, all fields would be empty. But the user might need to be informed that the text is missing.

But the initial instructions say to return empty strings if the information isn’t found. If the text is missing, then all extracted values would be empty. However, the user might not have realized they forgot to include the text.

In this case, perhaps the best approach is to return empty strings for all fields, but also mention that the text is missing. But according to the initial instructions, I should just extract the information and return empty strings if not found.

Wait, the user’s instruction says: “If any information is not found in the text, return an empty string for the corresponding value.” But if there’s no text, then all information is not found. So, all values would be empty strings.

But the user might not have provided the text. However, the assistant’s role is to process the given text. Since the text isn’t provided, the assistant can’t extract anything. Therefore, the answer would be empty strings for all required fields.

But the user might have intended to provide the text in their question. However, in the current message, they only wrote “answer user’s question” without any text.

Therefore, the correct response is to return empty strings for all extracted information, as there’s no text to analyze. However, the assistant might need to inform the user that the text is missing. But according to the initial instructions, the assistant should just extract the information and return empty strings if not found.

So, the final answer would be empty strings for all fields, but the assistant might need to clarify. However, based on the given instructions, the assistant should proceed with the extraction, which in this case results in empty strings.

The text content to be analyzed was not provided in the user’s query. Please supply the text for analysis, and I will extract the required information accordingly.
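The output above complains that no text was provided, which might mean the prompt in the AI Agent node does not actually inject the input field rather than the model being too small. To narrow this down, the same extraction prompt can be tested directly against Ollama's API, outside n8n (a sketch; the model name qwen3:8b and the prompt text are placeholders for whatever the workflow uses):

# run on the Ollama host; substitute the model and prompt from the AI Agent node
curl http://localhost:11434/api/generate -d '{
  "model": "qwen3:8b",
  "prompt": "Extract the sender name and order number from the following text. Return an empty string for anything not found.\n\nText: Order 12345 was placed by Alice.",
  "stream": false
}'

If the model answers correctly here, the prompt expression in the workflow is the more likely culprit.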