Llama3.2 Bug?

I’ve been having a frequent but inconsistent error: when I run a basic LLM chain with llama3.2, it fails with the message “Did not receive done or success response in stream”.

Here is the full stack trace if it helps:

Error: Did not receive done or success response in stream.
    at AbortableAsyncIterator.[Symbol.asyncIterator] (/usr/local/lib/node_modules/n8n/node_modules/ollama/dist/shared/ollama.11c1a3a8.cjs:47:11)
    at processTicksAndRejections (node:internal/process/task_queues:95:5)
    at ChatOllama._streamResponseChunks (/usr/local/lib/node_modules/n8n/node_modules/@langchain/ollama/dist/chat_models.cjs:760:26)
    at ChatOllama._generate (/usr/local/lib/node_modules/n8n/node_modules/@langchain/ollama/dist/chat_models.cjs:687:26)
    at async Promise.allSettled (index 0)
    at ChatOllama._generateUncached (/usr/local/lib/node_modules/n8n/node_modules/@langchain/core/dist/language_models/chat_models.cjs:215:29)
    at LLMChain._call (/usr/local/lib/node_modules/n8n/node_modules/langchain/dist/chains/llm_chain.cjs:162:37)
    at LLMChain.invoke (/usr/local/lib/node_modules/n8n/node_modules/langchain/dist/chains/base.cjs:58:28)
    at createSimpleLLMChain (/usr/local/lib/node_modules/n8n/node_modules/@n8n/n8n-nodes-langchain/dist/nodes/chains/ChainLLM/ChainLlm.node.js:100:23)
    at getChain (/usr/local/lib/node_modules/n8n/node_modules/@n8n/n8n-nodes-langchain/dist/nodes/chains/ChainLLM/ChainLlm.node.js:109:16)

I have a feeling this error means the Llama model is not responding, which is strange, as it appears to happen randomly. I don’t know how to fix it; I just wait a while, test it again later, and it miraculously works. I’m not sure whether this is a bug in n8n or a problem with my own machine. I’m running a MacBook Pro M2 2022 (8 GB).
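Since the error is thrown by the ollama JavaScript client that n8n bundles (visible at the top of the stack trace), one way to narrow this down is to call Ollama directly with that client, outside n8n, and watch whether the stream ever delivers its final done chunk. Here is a minimal sketch, assuming the `ollama` npm package is installed and the server is on its default port; the model tag and prompt are just placeholders:

```ts
import ollama from 'ollama';

async function main() {
  // Stream a chat completion, roughly the way the n8n LangChain node does.
  const stream = await ollama.chat({
    model: 'llama3.2',
    messages: [{ role: 'user', content: 'Reply with one short sentence.' }],
    stream: true,
  });

  let sawDone = false;
  for await (const chunk of stream) {
    process.stdout.write(chunk.message.content);
    if (chunk.done) sawDone = true;
  }
  console.log(sawDone ? '\nStream finished with done=true' : '\nStream ended without a done chunk');
}

main().catch((err) => {
  // If the stream ends without a done message (e.g. the model is overloaded
  // or the connection drops), the same error should surface here.
  console.error(err);
});
```

If this standalone test fails intermittently too, the problem is likely Ollama or the machine (8 GB is tight for llama3.2 alongside Docker), not n8n.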

If anyone else has experienced this or if anyone knows what the problem could be, please reply to this post. Thanks.

Information on your n8n setup

  • n8n version: 1.72.1
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker self-hosted
  • Operating system: macOS

Hi @Asadrz11014

I would try a few things here:

  • Update n8n and Ollama, and make sure you’re using the latest Llama 3.2 model
  • Restart the services
  • Perhaps try running Ollama separately and connecting to it via https://host.docker.internal:11434 (see the sketch below)
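For that last point, the Base URL in the n8n Ollama credentials would point at that address. If you want to confirm the server is reachable from inside the n8n container before wiring it into a workflow, a rough connectivity check with the same client could look like the following; the host value, and plain HTTP for a default Ollama install, are my assumptions, so adjust them to your setup:

```ts
import { Ollama } from 'ollama';

// Ollama running on the host machine, reached from inside the Docker
// container via host.docker.internal (assumed: a default install listens
// on plain HTTP, port 11434).
const client = new Ollama({ host: 'http://host.docker.internal:11434' });

async function check() {
  // list() is a cheap request that confirms the server answers at all
  const { models } = await client.list();
  console.log('Ollama reachable; installed models:', models.map((m) => m.name));
}

check().catch((err) => console.error('Ollama not reachable from this container:', err));
```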

You may also check out this similar community post:

Hi, thank you for the reply!
So I already run Ollama separately and have it connected the way you mentioned. I have tried to update the n8n version, but for some reason, after I pull from Docker, when I open n8n locally it still shows me the old version.

Have you stopped and started the container?
You need to restart the container and specify the version.

If you need help with that, let us know how you’re set up with Docker (Docker Compose, etc.).

Hi, sorry for the late reply. Yes I have; it doesn’t make a difference, unfortunately.

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.