Hello @bartv
I just installed the CPU-only version of Ollama on my Linux VPS, and I get this error every time I chat with llama: `ERROR: llama runner process has terminated: signal: killed`. Full error output below:
```
{
  "errorMessage": "Internal error",
  "errorDetails": {},
  "n8nDetails": {
    "n8nVersion": "1.55.3 (Self Hosted)",
    "binaryDataMode": "default",
    "stackTrace": [
      "ResponseError: llama runner process has terminated: signal: killed",
      "    at checkOk (/usr/local/lib/node_modules/n8n/node_modules/ollama/dist/shared/ollama.384eb0a9.cjs:72:9)",
      "    at processTicksAndRejections (node:internal/process/task_queues:95:5)",
      "    at post (/usr/local/lib/node_modules/n8n/node_modules/ollama/dist/shared/ollama.384eb0a9.cjs:119:3)",
      "    at Ollama.processStreamableRequest (/usr/local/lib/node_modules/n8n/node_modules/ollama/dist/shared/ollama.384eb0a9.cjs:231:25)",
      "    at ChatOllama._streamResponseChunks (/usr/local/lib/node_modules/n8n/node_modules/@langchain/ollama/dist/chat_models.cjs:474:24)",
      "    at ChatOllama._generate (/usr/local/lib/node_modules/n8n/node_modules/@langchain/ollama/dist/chat_models.cjs:407:26)",
      "    at async Promise.allSettled (index 0)",
      "    at ChatOllama._generateUncached (/usr/local/lib/node_modules/n8n/node_modules/@langchain/core/dist/language_models/chat_models.cjs:177:29)",
      "    at LLMChain._call (/usr/local/lib/node_modules/n8n/node_modules/langchain/dist/chains/llm_chain.cjs:162:37)",
      "    at LLMChain.invoke (/usr/local/lib/node_modules/n8n/node_modules/langchain/dist/chains/base.cjs:58:28)"
    ]
  }
}
```
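
Since the model runs on CPU with only the VPS's RAM, my suspicion is that the Linux OOM killer is terminating the llama runner: `signal: killed` is SIGKILL, which the kernel sends when the system runs out of memory (a container memory cap or systemd limit could also cause it). Here is a quick check I can run on the host to confirm (generic Linux commands, assuming shell access to the VPS):

```
# Look for OOM-killer events in the kernel log around the crash time.
# An OOM kill of the runner shows up as a line like
# "Out of memory: Killed process <pid> (ollama ...)".
dmesg -T | grep -iE "out of memory|oom"

# Same check via journald, in case dmesg is restricted:
journalctl -k --since "1 hour ago" | grep -i "killed process"

# Compare available RAM/swap against what the model needs:
free -h
```

If an OOM kill does show up there, I assume the usual fixes would be a smaller or more heavily quantized model, adding swap, or upgrading the instance's RAM.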