I'm running the gpt-oss LLM on a local llama.cpp inference server, with n8n running locally in Docker. Since there is no llama.cpp node in n8n, I use the OpenAI Chat Model node pointed at that server. The problem is that the "<|channel|>" reasoning parts of the answer are visible in the n8n chat window in my case. In YouTube videos where people use gpt-oss with the Ollama Chat Model node, these reasoning parts do not appear in the chat window.
I think the Ollama Chat Model node handles (strips) the reasoning tokens. If that is true, could you please add the same support for llama.cpp as well? Thanks.
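
In the meantime, a possible workaround might be a Code node ("Run Once for All Items") placed after the model node that keeps only the final answer. This is just a sketch: the field name `output` and the exact harmony tokens (`<|channel|>final<|message|>`, `<|end|>`, `<|return|>`) are assumptions based on what leaks into the chat window, so adjust them to whatever your items actually contain:

```js
// n8n Code node sketch (Run Once for All Items) — hypothetical workaround,
// not an official fix. Assumes the model reply is in item.json.output and
// uses gpt-oss "harmony" tokens; adapt the field name and tokens as needed.

// Capture the text of the "final" channel, if the harmony markers are present.
const FINAL = /<\|channel\|>final<\|message\|>([\s\S]*?)(?:<\|return\|>|<\|end\|>|$)/;

// Fallback: drop any "analysis" (reasoning) channel segments.
const ANALYSIS = /<\|channel\|>analysis<\|message\|>[\s\S]*?(?:<\|end\|>|$)/g;

return $input.all().map((item) => {
  const raw = String(item.json.output ?? '');
  const match = raw.match(FINAL);
  // Prefer the final channel; otherwise strip reasoning segments and pass
  // the remainder through unchanged.
  const cleaned = match ? match[1].trim() : raw.replace(ANALYSIS, '').trim();
  return { json: { ...item.json, output: cleaned } };
});
```

Recent llama.cpp builds also seem to have a `--reasoning-format` option for `llama-server` that can move reasoning into a separate `reasoning_content` field instead of inlining it; if your build supports it, that might avoid the Code node entirely.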
- n8n version: latest
- Database (default: SQLite): default (SQLite)
- n8n EXECUTIONS_PROCESS setting (default: own, main): default
- Running n8n via (Docker, npm, n8n cloud, desktop app): Docker
- Operating system: Linux