Describe the problem/error/question
My issue is that I want to have access to the reasoning of the Answer Correctness evaluation in subsequent nodes. However, it is only available inside the OpenAI Chat Model node and cannot be extracted; only the node's parameters are available:
What is the error message (if any)?
Please share your workflow
Information on your n8n setup
- n8n version: latest
- Database (default: SQLite): -
- n8n EXECUTIONS_PROCESS setting (default: own, main): -
- Running n8n via (Docker, npm, n8n cloud, desktop app): Self-hosted Docker
- Operating system: TrueNAS Scale
I can't edit the post anymore for some reason. The pasted workflow is a little messed up, but the screenshots describe my issue well.
Hi @drandarov-io , welcome to the n8n community!
I think what’s happening is that the OpenAI Chat Model is attached here as a sub-node of the Evaluation node, not running as a normal node in the main workflow path.
In my experience, sub-nodes in n8n don’t expose a normal downstream output the way regular nodes do, so when I reference them later I usually only see config or params, not the model’s internal reasoning as reusable workflow data. So I don’t think you’re missing a simple expression here.
I think that reasoning just isn’t exposed that way right now. If I needed to use it in later nodes, I’d probably add a separate normal LLM step that generates the justification explicitly and saves it as regular output.
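To make that concrete, here's a minimal sketch of what that separate step could look like, written as plain JavaScript you could adapt for an n8n Code node. The prompt wording and the JSON reply shape are my own assumptions for illustration, not anything the Evaluation node does internally:

```javascript
// Sketch: ask the judge model to return its score AND reasoning as JSON,
// so the reasoning becomes ordinary workflow data for downstream nodes.
// Prompt wording and reply shape are assumptions, not n8n internals.

function buildJudgeMessages(question, expected, actual) {
  return [
    {
      role: "system",
      content:
        "You grade answer correctness. Reply with JSON only: " +
        '{"score": <number 0-1>, "reasoning": "<one short paragraph>"}',
    },
    {
      role: "user",
      content: `Question: ${question}\nExpected: ${expected}\nActual: ${actual}`,
    },
  ];
}

// Parse the model reply defensively; fall back to the raw text if it
// is not valid JSON, so the reasoning is never silently dropped.
function parseJudgeReply(replyText) {
  try {
    const parsed = JSON.parse(replyText);
    return { score: parsed.score ?? null, reasoning: parsed.reasoning ?? replyText };
  } catch {
    return { score: null, reasoning: replyText };
  }
}
```

You'd send the messages to the model (e.g. via an HTTP Request node or a regular LLM node) and run the reply through parseJudgeReply, so the reasoning lands in the item's JSON like any other field.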
That's unfortunate. Having access to the reasoning would be quite useful to me. Maybe this could be added as a feature.
Having an option to expose these would be great:
I really like that idea
this feels like the kind of improvement that could add a lot of value to the Evaluation node 
Yeah, a separate LLM step is the move. We did the same thing when we needed to expose reasoning: just ran a quick follow-up asking the model to summarize its justification. A few more API calls, but it's the only reliable way to get the reasoning as usable data in your workflow.
You’re very close — this is not a bug, but a limitation of how the Evaluation node exposes data.
What’s happening
In your flow:
Message a model → Evaluation1 → Save reasoning summary?
- The Evaluation1 node does perform reasoning internally (via the model you attached)
- However, this reasoning is NOT exposed in the node’s output JSON
- It is only used internally to compute metrics

So after Evaluation1, the output only contains structured metrics, not the reasoning text.
Root cause
The reasoning is not accessible because the Evaluation node does not expose it in $json
So it’s not:

- a wrong field reference
- or a missing mapping

It’s simply that the node does not return reasoning at all.
Where the data is “lost”
The reasoning is generated inside Evaluation1 (setMetrics), but it never appears in the $json passed to the next node. So by the time you reach Save reasoning summary?, the reasoning is already gone (it never existed in the output).
Minimal fix (no redesign)
You have two practical options:
Option 1 (recommended)
Move the reasoning step OUT of the Evaluation node:
Then you’ll get:
$json.output[0].content[0].text
usable in downstream nodes
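If you go that route, it's worth reading that path defensively so items where the field is missing don't crash the workflow. A small sketch in plain JavaScript (the output[0].content[0].text shape is taken from the expression above; the helper name is my own):

```javascript
// Safely pull the reasoning text out of a model node's output item.
// The path mirrors $json.output[0].content[0].text; returns "" if any
// part of the structure is missing, instead of throwing.
function extractReasoning(json) {
  const text = json?.output?.[0]?.content?.[0]?.text;
  return typeof text === "string" ? text : "";
}
```

The optional chaining means a missing or differently shaped item simply yields an empty string you can filter on later.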
Option 2 (hacky / limited)
Duplicate the evaluation prompt manually in another LLM node
(basically recreating what the Evaluation node does internally)
Key takeaway
The Evaluation node is designed for metrics, not explainability output.
If you need the reasoning, you must generate it explicitly in a separate node.
Nice breakdown @erwin_burhanudin, that really makes it clear where the data disappears in the flow. Option 1 is definitely the cleaner path, especially if you want the reasoning captured in the data table for auditing later.