This discussion is about a bug in n8n where sub-workflows containing a "wait" node (webhook, Telegram, etc.) fail to return their output to the parent workflow.

Describe the problem/error/question

I have 4 workflows: A, B, C, and D.

  • Workflow D: A sub-workflow containing a Telegram node in “Send and Wait for Response” mode.
  • Workflow C: A parent workflow containing an AI agent that uses Workflow D as a tool (sub-workflow tool). Everything works correctly at this level — the agent calls the tool, waits for the Telegram response, receives it, and produces the expected output.
  • Workflow A: The top-level parent workflow that calls Workflow C as a sub-workflow via an “Execute Sub-Workflow” node. This is where the error occurs: Workflow A stops and the “Execute Sub-Workflow” node throws the error: workflowResult is not iterable.

What makes this particularly confusing:
Workflow A also contains another “Execute Sub-Workflow” node (calling Workflow B), which has a similar structure — an AI agent using the same Telegram sub-workflow tool (D). That node presented the same error initially, and I was able to fix it by:

  1. Wrapping the sub-workflow call inside a Loop node
  2. Changing the execution mode from “Execute once for each item” to “Execute once with all items”

I tried the same fix on Workflow C’s node in Workflow A, but it does not resolve the issue. “Wait for Sub-Workflow Completion” is enabled.

Note on the known bug:
It has been reported as fixed in n8n v2.0+. However, I am running v2.14 and the issue persists for me. This seems consistent with this similar report where the bug was still observed on v2.2.1, which suggests either a regression or an incomplete fix.


What is the error message (if any)?

workflowResult is not iterable

Thrown by the “Execute Sub-Workflow” node in Workflow A when calling Workflow C.


Share the output returned by the last node

  • Workflow C executes fully and correctly on its own.
  • Workflow A stops at the “Execute Sub-Workflow” node pointing to C, with the above error.
  • The equivalent node (pointing to B, same structure) works after the Loop + “execute once with all items” workaround — but this fix does not work for C.

Information on your n8n setup

  • n8n version: 2.14
  • Database: SQLite
  • n8n EXECUTIONS_PROCESS setting: main
  • Running n8n via: Self-hosted
  • Operating system: Linux

This is a fundamental limitation in how n8n handles wait nodes in sub-workflows called synchronously. When the Execute Sub-workflow node runs Workflow C, it expects to receive an array of items back immediately. But if C calls D, which contains a wait node, execution in D suspends while waiting for the Telegram response. C has already started its execution context and is waiting for D to return, but D never returns immediately; it just suspends. When n8n eventually tries to pass the result back to A, there is no result to iterate over because the chain is still suspended, so you get workflowResult is not iterable.
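The error text itself is consistent with this explanation. In JavaScript (V8), looping over a value that is still `undefined` throws exactly this kind of TypeError, with the variable name baked into the message. This is only a minimal reproduction of the error class, not n8n's actual code:

```javascript
// Sketch: a consumer that expects an array of items, like the
// Execute Sub-workflow node does when resolving a child's result.
function resolveSubWorkflow(workflowResult) {
  const items = [];
  // Throws a TypeError if workflowResult is undefined/null,
  // i.e. if the child chain is still suspended and has no result yet.
  for (const item of workflowResult) items.push(item);
  return items;
}

try {
  resolveSubWorkflow(undefined); // no result yet: chain still suspended
} catch (err) {
  console.log(err.message); // in V8/Node: "workflowResult is not iterable ..."
}
```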

This is not specific to v2.14; the regression likely appeared because the way sub-workflow execution results are handled changed between versions.

The cleanest workaround is to move the wait node out of the sub-workflow chain entirely. Instead of A → C → D(wait), restructure as:

  1. A calls C with fire-and-forget (set “Wait for Sub-workflow Completion” = false in the Execute Sub-workflow node). This lets C start D without A blocking.
  2. D sends the Telegram message and uses a wait node.
  3. When the user responds, D’s webhook fires and continues. D can then call back into A via a webhook trigger or a separate notification mechanism.

If restructuring is not practical, a simpler workaround: put the Telegram send-and-wait directly in workflow C instead of delegating it to D. Remove D from the chain. This keeps the wait node at one level of nesting rather than two, which n8n handles more reliably.

The Loop node workaround works for the one-level case because of how loop contexts handle partial results, but it does not propagate through multiple sub-workflow levels cleanly.

Yeah, this is still broken unfortunately. The wait node inside nested sub-workflows just doesn’t pass data back correctly. Your best bet is to have workflow D write its result somewhere like a database or Google Sheet and then have A poll for it instead of relying on the sub-workflow return; the loop workaround is hit or miss, as you already found out.

Maher_AMAMOU, this is a tricky one, but you’re actually very close

The error “workflowResult is not iterable” usually means the sub-workflow is not returning data in the format n8n expects. The “Execute Sub-Workflow” node always expects an array of items, but in your case Workflow C is likely returning a single object or something undefined at some point.

Even though Workflow C works fine on its own, when it’s called from Workflow A, n8n tries to loop through the result. If the output is not an array, it throws that error. That’s why your fix worked for Workflow B, but not for C — the return structure is different.

The main thing to check is the last node in Workflow C. Make sure it always returns data like this:
return [{ json: { ...yourData } }]
and not just a plain object or empty response. If there is any path (especially after the Telegram “Send and Wait” node) where nothing is returned, this error will happen.

Also, since you’re using an AI agent with a tool (Workflow D), sometimes the agent returns a nested structure or even undefined if something fails silently. That can break the parent workflow even if it looks fine when tested alone.

A safe way to fix this is to add a Code node at the end of Workflow C to normalize the output. For example, make sure it always returns at least one item, even if something goes wrong. This keeps the parent workflow stable.
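One way to sketch that normalizing Code node (run in “Run Once for All Items” mode, where n8n exposes the incoming items as `items`; the function name and fallback shape here are illustrative):

```javascript
// Final Code node in Workflow C: guarantee the output is always a
// non-empty array of { json: ... } items, whatever the agent returned.
function normalizeOutput(items) {
  // Coerce a missing or scalar result into an array first.
  const list = Array.isArray(items) ? items : items == null ? [] : [items];

  const normalized = list
    .filter((item) => item != null)
    .map((item) => (item.json ? item : { json: item }));

  // Never return an empty array: emit a fallback item so the parent
  // workflow always has something to iterate over.
  return normalized.length > 0 ? normalized : [{ json: { result: null } }];
}

// In the actual n8n Code node, the last line would be:
// return normalizeOutput(items);
```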

One more thing to check is execution mode. Since this worked for Workflow B, keep “Execute once with all items”, but focus more on making sure Workflow C always returns a proper array, not relying only on the loop workaround.

In short, this is less about the Loop node and more about consistent return format from Workflow C.

If you can share the last node output of Workflow C, I can help you adjust it so Workflow A stops throwing that error.

Thank you for the detailed explanation and proposed solutions — the structural limitation is now very clear to me.

However, I want to point out that setting “Wait for Sub-Workflow Completion” to false does not seem to solve the issue in my case. The Execute Sub-Workflow node still waits for the child workflow’s result, and the same error occurs regardless.

As for Solution 2, merging Workflow D directly into C is unfortunately not feasible for my project architecture.

Great hypothesis, and thank you for the thorough breakdown! I actually already checked the return format of Workflow C.

Here’s what makes me think the issue is elsewhere: the error on the “Execute Sub-Workflow” node in Workflow A triggers exactly at the moment I receive the Telegram message, meaning Workflow C’s AI agent hasn’t even finished processing yet. It’s still waiting for my Telegram reply when the parent workflow crashes.

This confirms to me that the problem is not about the return format, but about n8n trying to resolve the sub-workflow result while the chain is still suspended on the Telegram wait node.

Exactly what I ended up doing, with a few nuances worth sharing:

First, I replaced the Execute Sub-Workflow / When Executed by Another Workflow pair with an HTTP Request (parent) / Webhook (child) setup. The critical part: the HTTP Request must be set to “Respond Immediately” (if you wait for the execution result, the same error occurs, which, by the way, further confirms this is a bug and not a misconfiguration).

Then the flow goes like this:

  1. Workflow C (child) processes everything (AI agent + Telegram wait), and its last node is a Redis node that stores the final output under a predefined key.
  2. Back in Workflow A (parent), after the HTTP Request, a Wait node pauses execution, then a Redis GET node retrieves the result using that same predefined key.

Not perfect (the fixed wait duration is the main drawback) but it works reliably.
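The handoff in steps 1–2 only works if both workflows agree on the key. A tiny convention helper makes that explicit; the key format and the `correlationId` parameter are my own invention for illustration, not anything n8n provides:

```javascript
// Shared key convention between parent (A) and child (C).
// The parent passes a correlationId in the HTTP Request body, so both
// sides can derive an identical Redis key independently.
function resultKey(correlationId) {
  return `subworkflow:result:${correlationId}`;
}

// Child (Workflow C), last node:  Redis SET resultKey(id) -> JSON string of the output
// Parent (Workflow A), after Wait: Redis GET resultKey(id), then JSON.parse the value
```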

Hopefully an official fix comes soon :crossed_fingers:

Maher_AMAMOU, you’re right: this is not a return format issue

What you’re seeing is caused by the Telegram “Send and Wait” node pausing the sub-workflow, while the parent workflow (Execute Sub-Workflow) still expects a result immediately. n8n then tries to resolve the output too early and throws “workflowResult is not iterable”.

So your conclusion is correct, the problem is that the sub-workflow is still suspended, not finished.

Your workaround with HTTP + Webhook + Redis is actually a solid approach. Using “Respond Immediately” avoids the parent waiting, and moving the final result through Redis decouples both workflows. That’s why it works more reliably.

If you want to improve it further and avoid fixed wait time, you can replace the Wait node with a polling pattern. For example, loop until Redis has the result (check every few seconds), then continue. That way you don’t depend on a fixed delay.
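In n8n itself that polling pattern is built from nodes (Wait → Redis GET → IF, looping back until the key exists), but the core logic looks like this as a sketch; `getResult` stands in for whatever actually reads the Redis key and is purely illustrative:

```javascript
// Generic poll-until-ready helper: call `getResult` every `intervalMs`
// until it returns a non-null value or `timeoutMs` elapses.
async function pollForResult(getResult, { intervalMs = 3000, timeoutMs = 120000 } = {}) {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const value = await getResult(); // e.g. Redis GET on the predefined key
    if (value != null) return value; // child workflow has finished
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("Timed out waiting for sub-workflow result");
}
```

This trades the fixed delay for a bounded wait: the parent continues as soon as the result lands, and fails loudly if it never does.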

Another option is to design it as an event-based flow, where the child workflow triggers another webhook back to the parent once the Telegram response is received. This removes the need for waiting or polling completely.

At the moment, “Execute Sub-Workflow” does not handle paused executions (like Telegram wait) very well, even in newer versions, so your workaround is aligned with how people are solving it in practice.

Until n8n handles suspended executions properly in sub-workflows, separating execution like you did is the safest approach.


This is a known issue with sub-workflows that have wait nodes — the result format changes when returned from a wait, and the execute sub-workflow node doesn’t handle it right. The loop workaround helps mask it for some cases but not all.

Maher, you’ve actually diagnosed this perfectly. The problem is exactly what you identified: the sub-workflow is still suspended when the parent tries to resolve the result.

Your Redis + HTTP workaround is solid. The key insight is using “Respond Immediately” on the HTTP Request — this prevents the parent from waiting for an immediate result while the child is still suspended on the Telegram wait.

If you want to avoid the fixed wait duration, you could replace your Wait node with a polling pattern — loop and check Redis every few seconds until the result is there. Or go full event-based: have the child trigger a webhook back to the parent once Telegram responds. That removes waiting/polling entirely.

For now, your approach aligns with how most people solve this in practice since “Execute Sub-Workflow” doesn’t handle suspended executions properly, even in v2.0+.
