AI Agents creating new executions without incoming triggers, randomly happening


Hello!!

I wanted to check if someone can help me understand this random behavior from my agent. Sometimes an execution automatically starts multiple other executions, with what seem to be nested agents inside, and I have no idea why this is happening…

I attached two images:

- Correct execution: everything fine, all tools used, and correct output.

- Faulty execution: sometimes appears out of nothing in the logs, without any incoming trigger, as you can see in the image.

Both outcomes can happen without any changes to the workflow, just randomly; sometimes executions go well, others crash… The faulty execution usually starts a bit more than 1 min. after the first valid execution (which reacts to an incoming chat message).

I’m trying to understand how an execution can start another execution by itself, without any incoming trigger. The content of that execution is also quite weird to me, and it always crashes.

I have tested different n8n versions; they all behave like this randomly. Any help will be appreciated.

I have spent some days already troubleshooting and tracing steps… Something weird: while I’m running tests and tracing each step in the “editor”, it works as expected 95%+ of the time; sometimes the model returns something a bit weird… but still quite OK! However, whenever I activate the workflow and test by sending messages from my mobile WhatsApp, this issue arises… even though the incoming message JSON structure is exactly the same. To work from the editor, I pinned the exact same message from the “Executions” section (active workflow logs)… in order to be working with 100% the same data.
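Since the claim is that the incoming JSON is identical in both cases, one way to double-check is to copy the webhook payload out of a good execution and a faulty one from the logs, and diff them programmatically rather than by eye. A minimal sketch (the payload shapes below are made up for illustration, not the real WhatsApp message structure):

```python
def diff_payloads(a, b, path=""):
    """Recursively compare two webhook payloads (nested dicts)
    and return a list describing every key path that differs."""
    diffs = []
    for key in sorted(set(a) | set(b)):
        p = f"{path}.{key}" if path else key
        if key not in a:
            diffs.append(f"only in second: {p}")
        elif key not in b:
            diffs.append(f"only in first: {p}")
        elif isinstance(a[key], dict) and isinstance(b[key], dict):
            # Descend into nested objects so we report the exact leaf path.
            diffs.extend(diff_payloads(a[key], b[key], p))
        elif a[key] != b[key]:
            diffs.append(f"value differs at {p}: {a[key]!r} vs {b[key]!r}")
    return diffs

# Hypothetical example payloads, standing in for the two captured webhooks:
good = {"message": {"from": "123", "text": "hi"}, "timestamp": 1}
faulty = {"message": {"from": "123", "text": "hi"}, "timestamp": 2}
print(diff_payloads(good, faulty))
# → ['value differs at timestamp: 1 vs 2']
```

An empty result confirms the two payloads really are field-for-field identical; any output pinpoints where the "same" messages actually diverge.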

Another interesting detail: the frequency at which this happens has gotten worse for me in the last 30 days, but this one is impossible for me to prove or trace back to a specific version; I don’t have the data.

Running version: [email protected]

Can you show the second / green execution the same way you showed the yellow one in the first screenshot?

@jabbson Yes sure! Let me know if I understood correctly!

I see what you mean. What looks strange to me is that what you call the “correct execution” has an [END] for the input, while the “faulty one” has a JSON for an input.

What was the actual input?

@jabbson
Here is the screenshot of the actual input.
The “[END]" at the chat interface is probably something related to an LLM answer at some point, but I’m not using the chat, I’m using a webhook. By the way the webhook is always the same structure, never changed… and it’s received only once, when it goes well and when it crashes as the two initial screenshots, webhook input is the same.

Just to provide more details:
- When I say “correct execution”, I mean the tools executed perfectly and the correct message was sent to the end user, i.e. all agent tools were used correctly…
- When I say “faulty execution”, it is, as you can see, the agent invoking itself multiple times for more than 20 min., always crashing the workflow. The thing I cannot understand is that the execution is created without any new input… it basically starts automatically and never manages to use even a single tool… as the screenshot shows, it just starts a “faulty” execution leading to a crash (without a real trigger).

Let me know what else I can share…