No, in n8n this is not possible, and it’s by design rather than a missing toggle.
One viable workaround is a structured prompt that forces the model to externalize its reasoning as regular text. This does not expose the model’s internal chain-of-thought; instead, it produces an explicit, user-visible explanation, which is the recommended and compliant method for debugging, auditing, and governance in production environments.
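A minimal sketch of that workaround, outside of n8n itself: the prompt instructs the model to return its reasoning as an explicit field in its reply, and the caller parses it out. The field names (`reasoning`, `answer`) and the prompt wording are illustrative assumptions, not an n8n convention, and nothing forces the model to comply with the schema:

```python
import json

# Illustrative system prompt: ask the model to externalize its reasoning
# as a user-visible field rather than hidden chain-of-thought.
SYSTEM_PROMPT = (
    'Respond ONLY with a JSON object of the form '
    '{"reasoning": "<step-by-step explanation>", "answer": "<final answer>"}. '
    'The "reasoning" field must spell out how you reached the answer.'
)

def parse_structured_reply(raw: str) -> tuple[str, str]:
    """Split a structured model reply into (reasoning, answer).

    Raises ValueError if the reply does not match the expected shape,
    since the model is not guaranteed to follow the schema.
    """
    try:
        data = json.loads(raw)
        return data["reasoning"], data["answer"]
    except (json.JSONDecodeError, KeyError, TypeError) as exc:
        raise ValueError(f"model reply not in expected format: {raw!r}") from exc

# Hypothetical model reply, for demonstration only:
reply = '{"reasoning": "2 apples plus 3 apples makes 5 apples.", "answer": "5"}'
reasoning, answer = parse_structured_reply(reply)
```

In an n8n workflow the same idea maps to a system-prompt field plus a downstream node that parses the reply; the validation step matters because a non-compliant reply should fail loudly rather than be silently passed along.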
I think it depends on your model and AI provider. Does the AI model you picked have advanced reasoning by default? You can also use the “Think” tool and see the model’s thinking inside it. Thinking doesn’t go to the output by default here, though (which is good). I can also see all the tool calls it makes at the end of the AI inputs.
I disagree that the absence of the reasoning output makes debugging easier. I’d argue it’s the exact opposite: when the model makes a mistake, we need to know why it came to that conclusion. By analogy, when grading a student’s exam, it doesn’t matter if they got the answer right for the wrong reasons; they’d still get zero points.
It’s also not standard practice to omit the reasoning output. All major LLM providers show you the thinking process. I’m not sure why you’re so determined not to even offer a toggle.
I’m aware of the Think tool and of forcing structured output, but this just unnecessarily doubles the time to generate the response, with no guarantee the model will use the tool or output its reasoning verbatim.
I hope you will reconsider, as I’m sure I’m not the only person who would like to inspect the thinking process.