Sub-workflow with Wait Node Returns Incorrect Output to Main Workflow

Description:

My workflow setup is as follows:

A main workflow triggers a sub-workflow that encapsulates fixed operations.

The sub-workflow’s output should be returned to the main workflow’s execution node for downstream tasks.

Expected Behavior:
The sub-workflow's final node output is correctly passed to the main workflow as the output of the execution node.

Issue:
When the sub-workflow contains only simple nodes (e.g., code nodes), everything works as expected. When the sub-workflow includes a wait node (configured to use a webhook for third-party interaction), the output returned to the main workflow is the wait node’s input instead of the final node’s output.

Root Cause Investigation:
By reviewing the n8n source code, I traced the logic in workflow-execute-additional-data.ts, specifically the startExecution method:
// subworkflow either finished, or is in status waiting due to a wait node, both cases are considered successes here
if (data.finished === true || data.status === 'waiting') {
	// Workflow did finish successfully

	activeExecutions.finalizeExecution(executionId, data);
	const returnData = WorkflowHelpers.getDataLastExecutedNodeData(data);
	return {
		executionId,
		data: returnData!.data!.main,
		waitTill: data.waitTill,
	};
}
activeExecutions.finalizeExecution(executionId, data);

This logic treats a sub-workflow in the waiting state (e.g., after reaching a wait node) as "successfully completed" and returns the wait node's input data. However:

The sub-workflow is not truly finished — it resumes after the webhook callback and executes the final node.

The final node’s output is never returned to the main workflow.
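For illustration, here is a minimal sketch of the distinction this check collapses; the helper name is mine, not n8n's, and data.finished / data.status are the fields used in the snippet above:

// Illustrative helper only (not part of n8n)
function isTrulyFinished(data: { finished?: boolean; status?: string }): boolean {
	// A sub-workflow parked on a Wait node reports status 'waiting' but has not actually finished;
	// at that point its last-executed node is the Wait node, so getDataLastExecutedNodeData(data)
	// yields the Wait node's input rather than the final node's output.
	return data.finished === true && data.status !== 'waiting';
}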

Additional Context:

The “Wait For Sub-Workflow Completion” option is enabled in the main workflow (default behavior).

This option controls the doNotWaitToFinish flag in the executeWorkflow method. When doNotWaitToFinish is true, executeWorkflow returns [null] immediately at the start of the sub-execution, which is unrelated to this issue.
if (options.doNotWaitToFinish) {
	return { executionId, data: [null] };
}

Questions for the Community:

Why is the waiting state intentionally treated as “successful completion” instead of waiting for the sub-workflow to fully finish? The comment in the code explicitly states this is intentional, but it conflicts with the expectation that the main workflow should receive the final output of the sub-workflow.

I attempted to modify the code by removing the || data.status === 'waiting' condition. The sub-workflow then completed normally, but the main workflow's execution node threw a generic error: "An error occurred". What internal mechanisms might cause this?

Is there a supported way to ensure the main workflow receives the sub-workflow’s final output (not the wait node’s input) when using webhook-based wait nodes? This seems like a common integration scenario.

Information on your n8n setup

  • n8n version: 1.82.3
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:
3 Likes

No reply for a long time, so I had to continue investigating myself. There are actually two workflow synchronization issues here:

First Issue: When a sub-workflow contains a wait node, the parent workflow's execution node receives the wait node's input instead of the final results.
The normal execution flow:
  1. Parent triggers sub-workflow → 2. Sub completes → 3. Parent updates with the sub's final execution data → 4. Parent continues.

With wait node:

  1. Parent triggers sub → 2. Sub pauses at the wait node, changes status to waiting, and returns its current execution data to the parent's execution node → 3. Parent updates with the sub's execution data and persists the parent's execution data → 4. Webhook resumes the sub and sets a parent callback that will be resolved when the sub completes → 5. Sub completes without returning its execution data to the parent → 6. Parent resumes using the execution data from step 3, which is missing the sub's execution data produced after the wait node (illustrated below).
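To make step 6 concrete, here is a purely illustrative comparison (field names are mine, not n8n's internal shapes) of what the parent persisted at step 3 versus what the sub eventually produces at step 5:

// Illustrative only: snapshot the parent persisted at step 3, while the sub was parked on the Wait node
const parentSnapshotAtStep3 = {
    lastExecutedNode: 'Wait',
    output: [{ json: { stage: 'before wait' } }], // the Wait node's input items
};

// What the sub actually finishes with at step 5, after the webhook resumes it
const subFinalResultAtStep5 = {
    lastExecutedNode: 'Final Code Node',
    output: [{ json: { stage: 'after wait', result: 'final data' } }],
};

// At step 6 the parent resumes from parentSnapshotAtStep3, so subFinalResultAtStep5 never reaches it.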

My Fix:
Modified the resume callback at step 4 to merge the sub-workflow's final execution data (looked up by execution ID) after the sub completes. This ensures the parent uses the updated data when resuming.

at wait-tracker.js

async startExecution(executionId, subExecutionId) {
    const fullExecutionData = await this.executionRepository.findSingleExecution(executionId, {
        includeData: true,
        unflattenData: true,
    });
    if (typeof subExecutionId !== 'undefined') {
        // a merge function I developed: reads the sub-execution's data via findSingleExecution,
        // then merges it into fullExecutionData (a rough sketch follows this snippet)
        await this.updateParentExecutionData(fullExecutionData, subExecutionId);
    }
    const data = {
        executionMode: fullExecutionData.mode,
        executionData: fullExecutionData.data,
        workflowData: fullExecutionData.workflowData,
        projectId: project.id, // `project` is resolved earlier in the original method (omitted here)
        pushRef: fullExecutionData.data.pushRef,
    };
    await this.workflowRunner.run(data, false, false, executionId);
}
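
For reference, here is a rough sketch of what such a merge could look like. This is only my reading of the approach, not the exact implementation, and it assumes n8n's IRunExecutionData shape (resultData.runData, resultData.lastNodeExecuted), which may differ between versions:

// Hypothetical sketch of updateParentExecutionData; names and data shapes are assumptions
async updateParentExecutionData(fullExecutionData, subExecutionId) {
    // Load the finished sub-execution, including its run data
    const subExecution = await this.executionRepository.findSingleExecution(subExecutionId, {
        includeData: true,
        unflattenData: true,
    });
    if (!subExecution?.data) return;

    // Output of the sub-workflow's last executed node (assumes IRunExecutionData shape)
    const subResult = subExecution.data.resultData;
    const lastSubNode = subResult.lastNodeExecuted;
    const finalOutput = lastSubNode ? subResult.runData[lastSubNode]?.at(-1)?.data?.main : undefined;
    if (!finalOutput) return;

    // Replace the stale output the parent persisted for the node that launched the sub
    // (assumed here to be the last node the parent executed before pausing)
    const parentResult = fullExecutionData.data.resultData;
    const executeNodeName = parentResult.lastNodeExecuted;
    const runs = parentResult.runData[executeNodeName];
    if (runs?.length) {
        runs[runs.length - 1].data = { main: finalOutput };
    }
}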

Second Issue: When a sub-workflow contains a single wait node that is executed multiple times within a loop, or contains multiple wait nodes, the parent resumes before the sub completes.
Complex scenario:

  1. Parent triggers sub → 2. Sub pauses at wait1 → 3. Parent updates with the sub's execution data and persists the parent's execution data → 4. Webhook resumes the sub and sets a parent callback that will be resolved when the sub completes → 5. Sub hits wait2, prematurely triggering the parent callback → 6. Timeline forks: the parent resumes with stale data while the sub continues independently.

Enhanced Fix:
Added state validation in the resume callback:
  • Skip the parent resume if the sub is still in the "waiting" state
  • Only trigger the parent resume once the sub reaches the "completed" state

at webhooks/webhook-helpers.js

function executeWebhook() {
    ...
    if (parentExecution) {
        void executePromise.then(() => {
            if (executionId && activeExecutions.has(executionId)) {
                const status = activeExecutions.getStatus(executionId);
                // the sub is still paused on another wait node, so do not resume the parent yet
                if (status === 'waiting') {
                    return;
                }
            }
            const waitTracker = di_1.Container.get(wait_tracker_1.WaitTracker);
            // pass the sub's execution id to startExecution as an extra argument
            void waitTracker.startExecution(parentExecution.executionId, executionId);
        });
    }
    ...
}

Questions:

Is this state-validation approach appropriate for handling multiple-wait scenarios?
Are there potential edge cases this solution might miss (e.g., error handling)?
Are there more robust patterns you would recommend for parent-child synchronization with async operations?

This modified implementation currently meets my requirements, but I am seeking expert validation of its robustness and potential improvements. Any guidance would be appreciated.

2 Likes

I have the exact same issue: a sub-workflow with a wait-and-response Outlook node returns the input, not the output of the Outlook node containing the human response. I am running n8n on servers, and I am guessing that your fix only works for a local n8n instance, since, from my understanding, you are modifying the source code.

I would love to figure out a way to get this working, since the Outlook human-in-the-loop tool doesn't work: after clicking the form to submit an answer, it simply says that there is nothing to do. If anyone has figured out another way to fix this, please let me know.

1 Like

I ran into this same problem today. A sub-workflow that has a wait node resumed by a webhook call returns the wait node's input data, not the sub-workflow's output data.

This seems like a pretty serious bug. I hope it can be fixed soon.

2 Likes

Yep, I've been having the same issue since around v0.95 or so. I switched to using webhooks, but they time out too fast for what I'm doing (waiting for a response from a user). So as of right now, there is no workaround available that I can see, unless I just make the next node a child of the sub-workflow. Not ideal.