Third run of AI Agent skips tool and returns made-up response

Hi everyone! :wave:

I’m running into a strange issue with my AI Agent setup. I’m trying to call the “image agent” tool multiple times in a row as part of a workflow. The first two calls work exactly as expected — the tool runs, and I get the correct processed result.

However, on the third run, the workflow doesn’t call the tool at all. Instead, it seems to hallucinate a reply — returning a fabricated JSON response that was never generated by the tool. The reply follows the formatting of my rules but contains fake data. As a result, the image processing step is skipped entirely.

My Setup

Image Workflow → triggers the chain below

Image Agent Coordinator → formats the prompt and passes it to the Image Agent

Image Agent → selects and runs a specific image processing tool

(e.g. removeBackground, createPoster, scaleImage)

The issue happens in the Image Agent Coordinator, which stops calling the tool starting with the 3rd execution.

AI Agent Rules (Coordinator)

1. Role
You are the Image Agent Workflow Coordinator.
Your primary responsibility is to:
• Receive a prompt and url
• Pass this input directly to the Image Agent
• Wait for its full JSON response
• Return that exact response to the caller (e.g. webhook) without alteration

You do not decide which tool to use. You are not responsible for interpreting the prompt or executing image processing tools. That is the role of the Image Agent.

2. Input
You receive 3 fields:
• url: the image URL to process
• prompt: a textual description of the user’s request
• sessionId

3. Flow of Execution
1. Pass url and prompt to the Image Agent.
2. Wait for the Image Agent’s full response.
3. Return exactly the same JSON response without changing, omitting, or reordering any fields.

4. JSON Format Enforcement
You must return:
• A valid JSON object
• In the exact structure provided by the Image Agent
• Without adding or removing any fields

Do not:
• Reformat field names
• Change null to empty strings or vice versa
• Reorder fields
• Add metadata
• Strip any fields

5. Error Propagation
If the Image Agent returns an error, you must still return the JSON as-is.

6. Example
Prompt: “Remove the background from this image”
URL: “https://cdn.example.com/image.jpg”

✅ You send both to the Image Agent.
✅ Image Agent returns:
{
  "status": "success",
  "action": "removeBackground",
  "link": "https://cdn.example.com/image-clean.png",
  "text": "Background removed successfully."
}
✅ You return exactly that.

✅ Final Reminder:
Always call the Image Agent tool.  
Do not retry or loop.  
No caching. No assumptions. No interpretation.
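
In other words, the coordinator is meant to be a pure pass-through. As a minimal JavaScript sketch of that contract (callImageAgent stands in for whatever n8n does internally to invoke the tool — this is not my actual workflow code):

// Illustrative contract only: forward the input untouched and
// return the Image Agent's JSON verbatim.
async function coordinate(input, callImageAgent) {
  const { url, prompt, sessionId } = input; // the three expected fields
  // Always delegate; the coordinator never decides or interprets.
  const response = await callImageAgent({ url, prompt, sessionId });
  // Return exactly as received: no reformatting, reordering, or added fields.
  return response;
}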

What I Observed

• I’m using the gpt-4o-mini model
• Each request has a new image URL and prompt
• Only the third call fails, and it’s consistent
• The third output looks something like this:
{
  "status": "success",
  "action": "createPoster",
  "link": "https://",
  "text": "Poster created successfully."
}
…but the tool was never called — this is fabricated by the AI.
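
As a stopgap I’m considering a guard: a Code node right after the agent that rejects any response whose link doesn’t look like a real URL (the fabricated replies so far carry a bare "https://"). A rough sketch for an n8n Code node, assuming the agent’s JSON lands in each item’s json field as shown above:

// Fail the execution when the agent returns a link that does not
// look like a valid absolute URL (e.g. the bare "https://").
const items = $input.all();
for (const item of items) {
  const link = item.json.link;
  if (typeof link !== "string" || !/^https?:\/\/\S+\.\S+/.test(link)) {
    throw new Error(`Suspicious agent output, invalid link: ${link}`);
  }
}
return items;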

:question: Questions

• Why might the agent skip the tool call only on the third run?
• Is it a model behavior quirk, internal state issue, or something wrong with how I set up the rules?
• Any known issues with GPT-4o-mini skipping tool calls or hallucinating outputs?
• Would love to hear if anyone has faced this or has a workaround :pray:

Information on your n8n setup

  • n8n version: 1.85.4
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via: web app
  • Operating system: macOS Sequoia 15.0

Hi,

Some things you could try:
- Improve your tool description ("Call this tool to perform any action on an image" is not exactly descriptive :slight_smile:) — see the sketch after this list
- Try with gpt-4o instead of 4o-mini
- Lower the sampling temperature from 0.7, which should limit the "creativity" and improve consistency (also shown in the sketch below)
- One other thing: you say it's consistently the 3rd call that fails, but you might have a test cycle of fixed items / fixed operations. If so, have you tried switching up the test data to see what happens? (Or is it ALWAYS the 3rd one, no matter what you try?)
- Also: I'm unclear why you need memory, as this is just a sub-workflow that needs to perform an image task. The re-injected memory might cause issues, so you might want to test without it.
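
To illustrate the description and temperature points: here is roughly what a more descriptive tool definition plus a low temperature look like against the raw OpenAI API (a sketch using the openai npm package; the tool name and schema are invented for the example, and in n8n you would set these options on the model and tool nodes instead). tool_choice: "required" additionally forces the model to call a tool rather than improvise an answer:

// Illustrative only: not the n8n node configuration, just the
// equivalent raw API settings.
import OpenAI from "openai";

const openai = new OpenAI();

const completion = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  temperature: 0,           // less "creativity", more consistency
  tool_choice: "required",  // the model must call a tool, not answer on its own
  tools: [{
    type: "function",
    function: {
      name: "image_agent",  // hypothetical name for this example
      description:
        "Process the image at the given URL according to the prompt. " +
        "Supports removeBackground, createPoster, and scaleImage. " +
        "Always call this tool; never fabricate a result.",
      parameters: {
        type: "object",
        properties: {
          url: { type: "string", description: "Image URL to process" },
          prompt: { type: "string", description: "What to do with the image" },
        },
        required: ["url", "prompt"],
      },
    },
  }],
  messages: [{
    role: "user",
    content: "Remove the background from https://cdn.example.com/image.jpg",
  }],
});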

reg,
J.


I did the tests and it didn’t work, but I’m using the 4o-mini, not the 4o, because of the cost.

I’m also using this structure and putting it in the prompt:

{ACTION: activa_atendimento_humano}

but it’s not working either. I configured the tool with a name and description, but it’s still not working.

Hi, not sure what you are referring to, tbh.

Are you talking about the same workflow, or something else?

Regards,
J.

I also have a problem activating the AI tool.

Hi, well OK, but please create a new topic with a detailed explanation if it’s not about this one.

Regards
J.
