How to show reasoning?

expected (llama-server):

actual (n8n):

I would like to see the reasoning output similar to how llama-server shows it, but I can’t find an option in n8n to enable it. Is it not possible?

workflow

Share the output returned by the last node

actual: “Hello! How can I help you today?”
expected: """

  1. Analyze the input: The user just said “hi”. This is a standard greeting.
  2. Determine the intent: The user wants to start a conversation or get a greeting response.
  3. Formulate the response:
    • Acknowledge the greeting.
    • Offer assistance.
    • Keep it friendly and open-ended.
  4. Drafting options:
    • Option 1 (Simple): Hi there! How can I help you?
    • Option 2 (Casual): Hello! What’s up?
    • Option 3 (Formal): Greetings. How may I be of service?
  5. Selecting the best option: Option 1 is the most versatile and helpful standard AI response. It’s polite and inviting.
  6. Final Polish: “Hi there! How can I help you today?”
    Hi there! How can I help you today?
    """

Information on your n8n setup

  • n8n version: 2.4.6
  • Database (default: SQLite): postgres
  • n8n EXECUTIONS_PROCESS setting (default: own, main): queue
  • Running n8n via (Docker, npm, n8n cloud, desktop app): docker
  • Operating system: ubuntu

Hi, @Mtrl_Scientist!

No, in n8n this is not possible and it’s by design, not a missing toggle.

One viable workaround is a structured prompt that forces the model to externalize its reasoning as regular text. This doesn't expose the model's internal chain-of-thought; instead, it produces an explicit, user-visible explanation, which is the recommended and compliant method for debugging, auditing, and governance in production environments.
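As a minimal sketch of that workaround: instruct the model to wrap its visible reasoning in tags, then split the reply in an n8n Code node. The tag name, prompt wording, and output field names below are illustrative assumptions, not an n8n or llama-server API.

```javascript
// Hypothetical structured-prompt workaround: the system prompt asks the
// model to emit its reasoning as regular, tagged text, and splitReply()
// separates it from the final answer. In an n8n Code node you would read
// the text from the incoming item (e.g. $input) instead of a literal.
const SYSTEM_PROMPT = `Before answering, write your step-by-step reasoning
inside <reasoning>...</reasoning> tags, then give the final answer.`;

function splitReply(text) {
  // Capture the first <reasoning>...</reasoning> block, if any.
  const m = text.match(/<reasoning>([\s\S]*?)<\/reasoning>/);
  return {
    reasoning: m ? m[1].trim() : '',
    answer: (m ? text.replace(m[0], '') : text).trim(),
  };
}
```

Returning `{ reasoning, answer }` as separate fields lets downstream nodes display or log the reasoning while passing only the answer on to the user.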

I think it depends on your model and AI provider. Does the model you picked have advanced thinking enabled by default? You can also use the "Think" tool, and you can see the model's thinking inside it. By default, that thinking doesn't go to the output (which is good). I can also see all the tool calls it makes at the end of the AI inputs.

That is disappointing…

I disagree that omitting the reasoning output makes debugging easier. I'd argue it's the exact opposite: when the model makes a mistake, we need to know why it came to that conclusion. Similarly, when grading a student's exam, it doesn't matter that they got the answer right if it was for the wrong reasons; they'd still get zero points.

It's also not standard practice to omit the reasoning output. All major LLM providers show you the thinking process. I'm not sure why you're so determined not to even offer a toggle.

I'm aware of the thinking tool and of forcing structured output, but this needlessly doubles the time to generate a response, with no guarantee the model will actually use the tool or output its reasoning verbatim.

I hope you will reconsider, as I’m sure I’m not the only person who would like to inspect the thinking process.