Describe the problem/error/question
The connected LLM model node runs twice:
- once for the tool call,
- once for the text response to the user.
This is expected and correct behaviour.
But in some cases the LLM…
a) includes its text response to the user in the run that is reserved for the tool call,
or
b) includes a text response to the user in both runs.
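To make the failure mode concrete, here is a minimal Python sketch using OpenAI-style tool-calling message shapes. The field names (`tool_calls`, `content`) and the `has_leaked_text` helper are purely illustrative, not n8n internals:

```python
# Correct: the tool-call run carries ONLY the tool call, no user-facing text.
correct_run_1 = {
    "role": "assistant",
    "content": None,  # no text here
    "tool_calls": [{"id": "call_1", "function": {"name": "lookup", "arguments": "{}"}}],
}
correct_run_2 = {"role": "assistant", "content": "Here is your answer."}

# Incorrect (what I'm seeing): user-facing text leaks into the tool-call run.
incorrect_run_1 = {
    "role": "assistant",
    "content": "Here is your answer.",  # should be empty in this run
    "tool_calls": [{"id": "call_1", "function": {"name": "lookup", "arguments": "{}"}}],
}

def has_leaked_text(msg):
    """True if a tool-call message also carries user-facing text."""
    return bool(msg.get("tool_calls")) and bool(msg.get("content"))

print(has_leaked_text(correct_run_1))    # False
print(has_leaked_text(incorrect_run_1))  # True
```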
Here is a comparison.
Correct behaviour (works like this in most cases):
Incorrect behaviour (Example 1):
The text response is already included in the first (tool-call) run.
The text response in the second run seems to take that into account.
Both text responses are sent to the user.
Result: the webhook receives both responses, which doesn't make sense to the user.
Incorrect behaviour (Example 2):
The text response is already included in the first (tool-call) run.
The text response in the second run seems to take that into account.
Only the text response from the second run is sent to the user (which then doesn't make sense on its own).
Result: same as above - the webhook receives both responses, which doesn't make sense to the user.
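As a stopgap I'm considering filtering the output before it reaches the webhook. A rough sketch (assumes OpenAI-style message dicts; `final_user_text` is my own hypothetical helper, not an n8n function):

```python
def final_user_text(messages):
    """Return the last user-facing text, ignoring any text that
    arrived in the same message as a tool call (the leaked case)."""
    texts = [
        m["content"]
        for m in messages
        if m.get("role") == "assistant"
        and m.get("content")
        and not m.get("tool_calls")
    ]
    return texts[-1] if texts else None

# Illustrative messages from the two runs in Example 1:
runs = [
    {"role": "assistant", "content": "Leaked answer.",
     "tool_calls": [{"id": "call_1"}]},                 # run 1: tool call + leaked text
    {"role": "tool", "content": "{}"},                  # tool result
    {"role": "assistant", "content": "Final answer."},  # run 2: the real response
]
print(final_user_text(runs))  # Final answer.
```

This only hides the symptom (the duplicate text), so I'd still like to understand the underlying cause.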
What is the error message (if any)?
No “error message”, but strange outputs to the user.
Please share your workflow
Nothing special here - just an AI Agent node with a tool + memory.
Question
Is this issue somehow known?
Any ideas or recommendations on how to fix this?
Information on your n8n setup
- n8n version: 1.113.3 (7 versions behind)
- Database (default: SQLite): cloud version