I’m syncing contacts from Metabase to HubSpot on a schedule.
Workflow structure (high level):
Schedule Trigger
Metabase: Get the results from a question (returns ~1912 items)
Loop Over Items (batching/throttling)
HubSpot: Search contacts (search by email)
HubSpot: Create or update a contact (upsert)
Wait (used as delay to avoid rate limiting)
Loop back to Loop Over Items until all items are processed
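For reference, here is a trimmed sketch of the loop portion as n8n workflow JSON (node names match the list above; parameters are shortened and the connection layout is simplified — in my n8n version the first output of Loop Over Items is the "done" branch and the second is the "loop" branch):

```json
{
  "nodes": [
    { "name": "Loop Over Items", "type": "n8n-nodes-base.splitInBatches", "parameters": {} },
    { "name": "Search contacts", "type": "n8n-nodes-base.hubspot" },
    { "name": "Create or update", "type": "n8n-nodes-base.hubspot" },
    { "name": "Wait", "type": "n8n-nodes-base.wait", "parameters": { "amount": 5, "unit": "seconds" } }
  ],
  "connections": {
    "Loop Over Items": { "main": [[], [{ "node": "Search contacts", "type": "main", "index": 0 }]] },
    "Search contacts": { "main": [[{ "node": "Create or update", "type": "main", "index": 0 }]] },
    "Create or update": { "main": [[{ "node": "Wait", "type": "main", "index": 0 }]] },
    "Wait": { "main": [[{ "node": "Loop Over Items", "type": "main", "index": 0 }]] }
  }
}
```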
Expected behavior:
Process all ~1912 items from Metabase in batches (Loop Over Items), updating the corresponding HubSpot contacts.
Actual behavior:
The execution stops early and does not finish all batches/items.
The error location seems inconsistent: over the last few days it has stopped at the Wait node, but previously it stopped at other nodes as well.
In the UI it shows an error, but I don’t get a useful error message explaining why it stops.
Example from the last run: only a small subset of items was processed (the loop ran ~20 iterations and then stopped, even though Metabase returned far more items).
Questions:
What could cause an execution to stop at a Wait node inside a loop without a clear error message?
Is Wait the wrong node for throttling inside loops (should I use a different node/config)?
Which n8n settings (execution mode / database / queue mode) can cause paused executions to fail or not resume reliably?
What is the error message (if any)?
No clear message shown in the node output. The workflow just stops mid-run (currently at Wait) and the UI indicates an error, but there’s no actionable text.
(If needed, I can add log output from server logs / container logs.)
Please share your workflow
Share the output returned by the last node
In the last runs, the node execution counters indicate that it only ran ~19–20 iterations and then stopped, even though Metabase returned far more items.
I looked into your workflow and noticed that the “Wait” node at the beginning is causing the issue: you’re waiting for the Metabase query to finish before moving on to the next step. You could try changing the timeout in the “Wait” node to a lower value, like 5000 (5 seconds), or even using a retry mechanism with a configurable backoff time to handle potential failures.
You could also consider setting up an error handler in your Metabase node to catch any errors that occur during the query and retry the operation after a certain delay. This way, you can keep trying until the query is successful without stopping the workflow entirely.
Thanks for taking a look. I think there’s a misunderstanding though: the Wait node is not at the beginning of the workflow, it’s inside the loop after the HubSpot upsert, and it’s already set to 5 seconds.
Also, the Metabase query completes successfully and returns ~1900 items before the loop starts, so the issue doesn’t seem related to Metabase timing.
The workflow stops early (often on the Wait node, previously sometimes on other nodes) without a useful error message, and only a small subset of items is processed. This makes me suspect an execution persistence/resume issue around Wait (long-running execution, loop + wait, DB/executions settings), rather than waiting for Metabase to finish.
Do you know if there are known limitations/best practices for using Wait inside Loop Over Items (or recommended alternatives like Split in Batches, throttling per batch, or HubSpot batch endpoints) to avoid executions getting stuck?
If you need to use a Wait node, I would try removing it, then using the loop node with a smaller batch size; maybe the execution is failing because of the main process somehow.
Yes. Using Wait inside Loop Over Items is a known anti-pattern in n8n and can cause executions to stall or never resume, especially in long-running workflows.
Best practices / safer alternatives:
Avoid Wait per item. It creates long-lived executions and increases the risk of stuck runs (restarts, worker limits, concurrency).
Use Split in Batches (or Loop Over Items in batch mode) and place one Wait after each batch, not per item.
Throttle per batch (e.g., 25–100 items), then delay 1–3s before continuing.
Prefer HubSpot batch endpoints where available to reduce API calls and rate-limit pressure.
Handle 429s with retries/backoff instead of fixed waits when possible.
Summary: batch the work, wait between batches (not items), and use HubSpot batch APIs to keep executions short and reliable.
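The “batch the work, back off on 429” pattern can be sketched in a Code node roughly like this. The function names and retry parameters are illustrative, not n8n built-ins (for simpler cases the node’s own “Retry on Fail” setting may be enough):

```javascript
// Illustrative sketch: split items into fixed-size batches and retry a
// rate-limited call with exponential backoff, instead of a fixed Wait
// per item.

// Split the full item list into batches (e.g. 50 contacts per
// HubSpot batch-upsert call).
function chunk(items, size) {
  const batches = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

// Run `fn`, retrying with exponential backoff while it throws an
// error carrying `status === 429` (rate limited).
async function withBackoff(fn, maxRetries = 5, baseDelayMs = 1000) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (err.status !== 429 || attempt >= maxRetries) throw err;
      const delayMs = baseDelayMs * 2 ** attempt; // 1s, 2s, 4s, ...
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```

With ~1912 items and a batch size of 50, that is 39 HubSpot calls instead of ~1912, which keeps the execution short enough that Wait/resume problems are far less likely.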