My workflow is executing the tools multiple times.
I have used Qwen and GPT-OSS-120B; both have the same issue.
I tried this prompt to test: “Do not ever use any tool more than once.
You must only use the tool once.
Do not repeat any tool.”
You can try setting Max Iterations under Options to less than 10 to try to decrease this. You can also experiment with Run Once in the Settings of the AI node.
Max Iterations is not a solution if I have more than one tool; also, the last message will be “Agent stopped due to max iterations.” no matter what the response is.
Run Once is not working either.
I am experiencing the same issue and I think it is related to the GPT-5 Mini model. I could be wrong. What model are you using? I haven’t found a solution in a week.
I tried reducing Max Iterations (which obviously doesn’t make sense as a fix). I also tried ordering the tool to be used at most once in the prompt and in the tool description; that didn’t work.
Hi @Ahmed9, I think this is tricky.
I don’t think there’s a perfect way to force the model to call a tool only once, aside from writing a good prompt, using the think tool, and similar prompt-engineering tricks.
Even if you configure the tool itself to limit the number of calls, the AI agent can still attempt to call it again and just receive a response saying it has already been called. That makes it a complex solution.
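For anyone wanting to try that route anyway, the “tool refuses repeat calls” idea can be sketched as a plain JavaScript guard. This is a hypothetical helper, not an n8n API; `makeOnceTool` and `search_orders` are made-up names for illustration:

```javascript
// Sketch of a "call-once" guard around a tool function, in plain JavaScript.
// makeOnceTool and search_orders are hypothetical names, not part of any n8n API.
function makeOnceTool(name, fn) {
  let called = false;
  return function (args) {
    if (called) {
      // The agent can still attempt a second call; it just gets this notice back
      // instead of the tool actually re-executing.
      return { error: `Tool "${name}" has already been called. Use its previous output.` };
    }
    called = true;
    return fn(args);
  };
}

// Example: wrap a dummy tool.
const searchOrders = makeOnceTool("search_orders", ({ id }) => ({ result: `order ${id} found` }));

console.log(searchOrders({ id: 42 })); // first call runs the tool
console.log(searchOrders({ id: 42 })); // second call is blocked with the notice
```

As the post above notes, this doesn’t stop the agent from *attempting* the call, so the model still burns an iteration on it; it only guarantees the side effect runs once.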
However, an idea came to my mind with the new feature that allows you to add an AI Agent Tool as a tool.
In this tool’s configuration, you can set Max Iterations = 1.
This ensures the tool will only be called once with a simple prompt. If the main agent tries to call it again, you’ll see the message “Agent stopped due to max iterations.” The fix for this is to enable the Return Intermediate Steps option, so you still get the response from that first call.
Finally, make sure you’re using a model that properly supports tool calling. The better the model, the simpler the prompt and solution you’ll need. Otherwise, none of these workarounds may fully solve the issue.
Here’s the idea I’m talking about:
You can see that the model tries to call the AI Agent Tool more than once, but it stops after the first call.
Although I’m still not sure it’s a perfect solution, it seems to be a workaround that probably works in some cases, so try it and see if it works for your situation.
Had the same problem. I created a subworkflow that ends with a Code node that simply returns this:
```json
{
  "status": "scheduled",
  "next": "reply_to_user"
}
```
And it helped: the tool is now called only once. I also mentioned in the tool prompt to call it only once.
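A minimal sketch of what that final Code node could look like, assuming n8n’s usual convention that a Code node returns an array of items with a `json` property (the field values here are just the ones from the post above):

```javascript
// Sketch of the final Code node in the subworkflow: return one fixed item
// so the agent always gets the same short, unambiguous tool response.
function buildToolResponse() {
  return [
    {
      json: {
        status: "scheduled",   // signals the agent that the work is done
        next: "reply_to_user", // nudges the agent toward answering, not re-calling
      },
    },
  ];
}

console.log(JSON.stringify(buildToolResponse()[0].json));
// → {"status":"scheduled","next":"reply_to_user"}
```

In the actual Code node the body would just be the `return` statement; the stable response seems to give the model a clear stop signal.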
Thanks for your idea, could you provide an example? I’m facing the same problem and I’m looking for a better solution. Currently, I use a system prompt like this:
Core Directives
- Find the Tool Response: Your context contains the history and tool outputs. A tool’s output ALWAYS follows the specific label “Tool: [{“type”: … }]”; it’s almost at the end. You MUST actively search for this label in the text. Finding this label is your signal that a tool has finished running.
- Single Execution Rule: After you find the “Tool: […” label, your ONLY next step is to reply to the user. It is forbidden to re-run a tool for the same request once you have its output.
- Trust Tool Outputs: A tool’s response is FINAL.
An empty response ([], "") is a SUCCESSFUL search with zero results.
true is a SUCCESSFUL operation.
Accept the first response and DO NOT re-run the tool.
It works most of the time, but in some cases it fails and the agent executes the same tool twice (with the same parameters), so I’d like to use something “harder”.
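One “harder” option (a sketch in plain JavaScript, not an n8n feature; all names here are hypothetical) is to memoize tool calls by their parameters, so a repeated call with identical arguments gets the cached first response instead of re-executing the side effect:

```javascript
// Memoize tool calls by their serialized arguments: a duplicate call with the
// same parameters returns the cached first result instead of re-executing.
function memoizeToolCalls(fn) {
  const cache = new Map();
  return function (args) {
    // Note: JSON.stringify is order-sensitive, so {a,b} and {b,a} would be
    // treated as different calls; fine for a sketch, not for production.
    const key = JSON.stringify(args);
    if (cache.has(key)) {
      return cache.get(key); // duplicate call: serve the first response
    }
    const result = fn(args);
    cache.set(key, result);
    return result;
  };
}

// Usage: the side effect (the counter) fires only once per distinct argument set.
let executions = 0;
const scheduleMeeting = memoizeToolCalls(({ when }) => {
  executions += 1;
  return { status: "scheduled", when };
});

scheduleMeeting({ when: "monday" });
scheduleMeeting({ when: "monday" }); // same params: cached, no second execution
console.log(executions); // 1
```

This doesn’t stop the agent from emitting the duplicate call, but it makes the duplicate harmless and keeps the tool response consistent.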



