I’ve been hitting the execution limit in my n8n Pro plan much earlier than expected, and I’d like to understand how the usage units are calculated.
Is one node execution equal to one logic unit?
How exactly are these units consumed during workflow runs?
How many total units are included in the Pro plan?
I want to make sure I’m optimizing my workflows correctly, but right now it feels like the limit is reached too quickly. Any insights or examples from your experience would be really helpful.
In Cloud plans, executions are not node-based: one execution means your flow runs to completion once. Only production executions count, i.e. runs of published workflows, and sub-workflows do not count.
My advice on regulating your usage: first, don't publish flows you are still testing. Also keep an eye on wherever you add a Schedule Trigger node, as it can consume a lot of production executions if not configured correctly. The same goes for webhooks: placed incorrectly, they can eat up a lot of executions too. Just be mindful of where you use triggers; that's really it.
One thing that helps a lot over time is defining workflow design standards, not just watching usage after the fact. For example, deciding when a process should stay in a single workflow versus being broken into separate automations, and reviewing whether each published workflow still delivers enough value for the execution cost it creates. That usually gives better long-term control than only reacting once usage has already grown.
To answer your specific questions about the n8n Pro plan execution limits:
On n8n Cloud, a “workflow execution” counts as one full run of a workflow from trigger to finish. Each root-level workflow run = 1 execution. Sub-workflows called via Execute Workflow node count separately - each one is also billed as its own execution.
For optimization tips from production experience:
Merge parallel paths before logging - avoid loops that call sub-workflows unnecessarily. Instead of calling a sub-workflow 100 times in a loop, batch the data and call it once.
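To make the batching idea concrete, here is a minimal sketch in plain JavaScript. The `leads` array and the batch size of 100 are illustrative assumptions, not values from your workflow; in n8n you would do the equivalent inside a Code node before a single Execute Workflow call.

```javascript
// Sketch: instead of invoking a sub-workflow once per lead (250 calls),
// group the items into batches and invoke it once per batch.
function batchForSubWorkflow(items, batchSize = 100) {
  const batches = [];
  for (let i = 0; i < items.length; i += batchSize) {
    batches.push(items.slice(i, i + batchSize));
  }
  return batches;
}

// Hypothetical data: 250 leads to process.
const leads = Array.from({ length: 250 }, (_, i) => ({ id: i + 1 }));
const batches = batchForSubWorkflow(leads, 100);
console.log(batches.length); // 3 sub-workflow calls instead of 250
```

The design point is simply that each Execute Workflow call costs an execution, so the fewer, larger calls you make, the cheaper the run.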
Scheduled workflows add up fast - if you have 10 workflows running every 5 minutes, that’s 10 x 288 = 2,880 executions/day just from scheduling. Review which ones actually need to run that frequently.
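A quick back-of-the-envelope helper makes that arithmetic easy to rerun with your own numbers (the 10 workflows and 5-minute interval below are just the example figures from above):

```javascript
// Estimate how many executions scheduled workflows consume per day.
// A workflow on an N-minute schedule runs (24 * 60) / N times a day.
function scheduledExecutionsPerDay(workflowCount, intervalMinutes) {
  const runsPerWorkflowPerDay = (24 * 60) / intervalMinutes;
  return workflowCount * runsPerWorkflowPerDay;
}

console.log(scheduledExecutionsPerDay(10, 5));  // 2880 per day
console.log(scheduledExecutionsPerDay(10, 60)); // 240 per day, 12x fewer
```

Moving a workflow from every 5 minutes to hourly cuts its share of the quota by 12x, which is usually the single biggest lever on Cloud plans.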
Use webhooks instead of polling where possible - polling triggers consume executions even when there’s nothing to process.
If you’re building something that genuinely needs high volume (chatbots, real-time integrations), self-hosting on a VPS is far more cost-effective at scale. $5-10/month on Hetzner or Vultr gives you unlimited executions.
For the Pro plan specifically, n8n’s pricing page shows the included executions and you can see real-time usage in Settings > Usage. Hope this helps clarify things!
@Anshul_Namdev @tamy.santos @nguyenthieutoan This flow is designed to call leads in batches, and a webhook is triggered at the end of each call (not included in this flow). That webhook simply updates a Google Sheet. Is this flow set up correctly, or does it need to be optimized?
@Muhammad_Hamza_Azhar Everything looks alright. Since you have already done batching, it's almost certain you won't need anything else unless the API you are calling enforces a rate limit lower than your batch size; for now this is good. And hopefully you got your answer here:
@Muhammad_Hamza_Azhar
Your flow should work, but it looks more complicated than it needs to be. Since you already get a webhook when each call ends, I’d remove the 15-minute wait and status polling, and rely on the webhook instead. That will be more efficient and easier to maintain.