Ordering of response in n8n api for /executions endpoint

Looking at the API reference for /executions (API reference | n8n Docs), there is a limit parameter. What is the ordering of the response?
If we set limit=1, are we guaranteed to get the latest execution (by the “stoppedAt” field)?

Not necessary for the question, but for a little context: my goal is to get the last execution for a specific workflow id and, if it is in error and more than 5 minutes in the past, relaunch it. I have been having many issues with hidden memory limits on n8n cloud (the $50/month plan), with “random” crashes of the whole workspace. It is probably OOM, even though the workload is fairly low: I am making thousands of API calls to OpenRouter, but I split the work (via an HTTP trigger node) across 10 workflows that each make only 50 calls at a time, without data transfer. They get the calls to make from a DB and insert results into the DB, and the calls themselves are made in a subworkflow and return no data, so they should not impact the memory footprint of any of the 10 initiating workflows. Relaunching a few times until all calls are made “resolves” the issue, and I want to automate that.

The /executions API handler calls the getExecutionsForPublicApi database function, and it looks like the records are fetched in descending order by id.
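For what it's worth, if that descending-by-id ordering holds, a single call with limit=1 and a status filter would return the newest matching execution. A minimal sketch of building such a request, assuming the documented public API path /api/v1/executions with workflowId, status, and limit query parameters and the X-N8N-API-KEY auth header (the instance URL and workflow id below are placeholders):

```python
# Sketch of querying the n8n public API for the newest execution of a workflow.
# Endpoint path, parameter names, and auth header are taken from the public
# API reference; the base URL and workflow id are illustrative placeholders.
from urllib.parse import urlencode

def executions_url(base_url: str, workflow_id: str,
                   status: str = "error", limit: int = 1) -> str:
    """Build the GET /api/v1/executions request URL for one workflow."""
    query = urlencode({"workflowId": workflow_id, "status": status, "limit": limit})
    return f"{base_url}/api/v1/executions?{query}"

url = executions_url("https://example.app.n8n.cloud", "1234")
# The actual request would then be something like:
#   requests.get(url, headers={"X-N8N-API-KEY": api_key})
```

But as noted below, whether the first record really is the most recent one is exactly what cannot be relied on.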

When you request executions for a specific workflow id, the database probably returns the records in whatever “natural order” they happen to be stored in, so it does not look like you can depend on limit=1 returning the most recently finished/errored execution.

A more reliable approach would be to fetch all of the executions and then filter and/or sort them in the workflow.


Thank you @hubschrauber. That seems very “wasteful” in resources. How much RAM do you think such a call will consume to fetch all executions? The $50/month cloud plan is Pro-1 (10k executions): 640 MiB RAM, 20 millicore CPU burstable. If it takes too much RAM away from my other workflows, the whole n8n workspace crashes, unfortunately.

I really don’t mean this to be flippant, but it seems like you are starting with a problem that stems from applying too few resources, and are then unhappy that the solution probably requires more resources. This is no different from other such “economic” dilemmas: if the cost of a solution outweighs the need/justification for solving a problem, then it’s probably not the right solution.

You might want to consider trading the cost of a hosted (n8n cloud) account for the raw-compute cost of a self-hosted deployment, which would be more tunable/customizable to your specific requirements. There are a growing number of cloud/hosting companies that provide a quick setup for n8n.

Edit: Forgot to answer the “how much RAM” part. That cannot be estimated without knowing how many executions of the workflow would fall within the execution data retention period on cloud. If you tested it and found that it was too much, you might be able to reduce it to something workable by changing “Save successful production executions” to “Do not save” on your workflow.


That’s not flippant at all, and I agree. I would self-host n8n if I could (I actually did, to confirm the crashes were due to RAM limitations), but it’s not my choice to run production on n8n cloud Pro-1. I actually built the project first in pure TypeScript because it was faster, and then spent a lot of time trying to make it work on n8n (although part of that is because I was still discovering n8n). My customer wants n8n because he thinks he can maintain it effortlessly afterwards, since it is “low code”. I don’t really think so, but I have no say in the matter, so I just try to make it work as per the requirements.

Getting back to the question: API-side filtering/sorting might be a good feature for “normal n8n use cases”. I bypassed this limitation by having each workflow store its own execution id in a DB at run time and update its status as the workflow progresses, so I can detect later which ones crashed.

Seeing your edit: does changing “Save successful production executions” to “Do not save” really reduce RAM in addition to disk/DB usage? It is not really clear from the Cloud data management | n8n Docs page that it impacts RAM.

This was regarding the memory footprint required to read in all executions and then filter them to find the crashed ones (and the side-effect impact on other running workflows), since you asked “How much RAM do you think such call will consume to fetch all executions?”

ok, thank you

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.