Execution runs for a long time (confirmed), then later shows “Error in ~6–8s” / “execution was interrupted” with no error details (loop-heavy workflow)

Hi everyone,

I’m stuck debugging a strange execution behavior in n8n where I lose all failure details.

What’s happening

  • I start a workflow and it shows as Running in the Executions view.

  • I’m confident it is genuinely executing during that time (not just a UI glitch):

    • I can see ongoing activity consistent with the workflow still processing (e.g., external API credits being consumed / data being written / periodic side effects while it’s running).
  • The workflow is loop-heavy (iterating through many items / frequent loop cycles).

The problem

  • After a long time (sometimes 30–120 minutes), when I later open the execution entry, it shows:

    • “Error in ~6–8 seconds” (as if it only ran a few seconds)
  • When opening the execution, I often see:

    • “Can’t show data”

    • “The execution was interrupted, so the data was not saved. Try fixing the workflow and re-executing.”

  • I never get a clear final error message on-screen at the moment it fails, and because execution data isn’t saved, I can’t see:

    • which node failed

    • stack trace / error message

    • last executed node

So effectively: it runs for a long time (confirmed), then ends up as a very short “Error in Xs” with no debug info, which is extremely hard to troubleshoot.

Questions

  1. What does “execution was interrupted, so the data was not saved” typically mean in n8n (common root causes)?

  2. Are there known issues where long-running / loop-heavy executions can end up recorded as “Error in a few seconds” even though they ran much longer?

  3. What is the best way to ensure I still capture the failing node + error details (e.g., execution saving settings, error workflow with Error Trigger, log settings)?

  4. If this is related to timeouts / memory / process restarts / worker crashes, what logs/settings should I check first?

@torbenjaeger18 That “Error in ~6s” after hours is not an actual workflow error; it’s a sign your n8n process was forcefully killed, usually due to memory exhaustion. Loop-heavy workflows accumulate item data in RAM across iterations, and once usage exceeds available memory, the OS kills n8n (often with exit code 137). The execution record then gets finalized later with a short, misleading duration and no saved data.

How to confirm: check your server logs for “Killed” or “Out of memory” messages after a crash. In Docker, run `docker logs [container] --tail 50`.
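A few generic commands for confirming an OOM kill (the container name `n8n` below is a placeholder for your actual container):

```shell
# Exit code 137 = 128 + SIGKILL(9): the process was killed, commonly by the OOM killer.
# OOMKilled=true is an explicit flag set by Docker when the kernel killed the container.
docker inspect --format '{{.State.ExitCode}} (OOMKilled={{.State.OOMKilled}})' n8n

# Last log lines before the crash
docker logs n8n --tail 50

# On the host, look for kernel OOM-killer messages
sudo dmesg -T | grep -iE 'killed process|out of memory'
```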

To troubleshoot: enable “Save Execution Progress” in your workflow settings. This saves data after each node, so even after a crash you can see how far the execution got.
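The same behavior can also be set instance-wide via n8n environment variables — a sketch for a Docker setup (container name, port, and log level are example values; adjust to your deployment):

```shell
# EXECUTIONS_DATA_SAVE_ON_PROGRESS: persist data after each node, not only at the end.
# EXECUTIONS_DATA_SAVE_ON_ERROR=all: always keep failed executions.
# N8N_LOG_LEVEL=debug: verbose logs around the moment of the crash.
docker run -d --name n8n -p 5678:5678 \
  -e EXECUTIONS_DATA_SAVE_ON_PROGRESS=true \
  -e EXECUTIONS_DATA_SAVE_ON_ERROR=all \
  -e N8N_LOG_LEVEL=debug \
  n8nio/n8n
```

Note that saving progress after every node adds database write overhead, so you may want to turn it back off once the failing node is identified.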

The fix: break large loops into smaller batches — process a batch, let its data be released, then move to the next. For example, process 100 items at a time using the “Loop Over Items” (Split In Batches) node, or move the per-batch work into a sub-workflow called via “Execute Workflow”, so each batch runs as its own execution and its memory is freed when it finishes.

Also, review your timeout settings (the per-workflow timeout and the `EXECUTIONS_TIMEOUT` environment variable) to rule out a simple timeout, but an out-of-memory kill is the most likely cause here.
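To rule out timeouts and give Node.js explicit heap headroom at the same time, the relevant settings can be passed as environment variables — again a sketch for Docker, with example values:

```shell
# EXECUTIONS_TIMEOUT=-1 disables the global execution timeout (otherwise it is in seconds).
# NODE_OPTIONS --max-old-space-size raises the V8 heap limit (in MB) so long,
# loop-heavy executions have more room before the process runs out of memory.
docker run -d --name n8n -p 5678:5678 \
  -e EXECUTIONS_TIMEOUT=-1 \
  -e NODE_OPTIONS="--max-old-space-size=4096" \
  n8nio/n8n
```

Raising the heap limit only delays an OOM kill if the container or host doesn’t have matching RAM, so pair it with the batching approach above rather than relying on it alone.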

Once you check logs, you’ll confirm the cause. If you share your environment and data size, I can suggest more tailored batching strategies.


Thank you very much, extremely helpful!!


This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.